text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
Vector –Matrix Inner Product with Computer Shader and C++ AMP
Large vector-matrix inner products by the GPU are 250 times faster than straight forward CPU implementations on my PC. Using C++ AMP or a Compute Shader the GPU realized a performance of over 30 gFLOPS. That is a huge increase, but my GPU has a “computational power” (whatever that may be) of 1 teraFLOP, and 30 gFLOPS is still a long way from 1000 gFLOPS.
This article presents a general architectural view of the GPU and some details of a particular exemplar: the Ati Radeon HD5750. Then code examples follow that show various approaches to large vector-matrix products. Of course the algorithms at the end of the article are the fastest. It is also the simplest.
Unified View of the GPU Architecture
Programming the GPU is based on an architectural view of the GPU. The purpose of this architectural view is to provide a unified perspective on GPUs from various vendors, hence with different hardware setup. It is this unified architecture that’s being programmed against using DirectX11. A good source of information on Direct Compute and Compute Shaders is the Microsoft Direct Compute BLog. The architecture described below is based on information from Chas Boyd’s talk at PDC09, as published on Channel9. Of course, this blog post only presents some fragments of the information found there.
A GPU is considered to be build from a number of SIMD cores. SIMD means: Single Instruction Multiple Data. By the way, the pictures below are hyperlinks to their source.
The idea is that a single instruction is executed on a lot of data, in parallel. The SIMD processing unit is particularly fit for “data parallel” algorithms. A GPU may consist of 32 SIMD cores (yes, the image shows 40 cores) that access memory with 32 floats at a time (128 bit bus width). Typically the processor runs at 1Ghz, and has a (theoretical) computational power of about 1 TeraFLOP.
A SIMD core uses several kinds of memory:
- 16 Kbyte of (32-bit) registers. Used for local variables
- 8 Kbyte SIMD shared memory, L1 cache.
- L2 cache
The GPU as a whole has typically 1Gb of general RAM. Memory access bandwidth is typically of order 100GBit/s.
Programming Model
A GPU is programmed using a Compute Shader or C++ AMP. Developers can write compute shaders in HLSL (Looks like C) to be executed on the GPU. AMD is a C++ library. The GPU can run up to 1024 threads per SIMD. A thread is a line of execution through code. The SIMD shared memory is shared among the threads of a SIMD. It is programmable in the sense that you can declare variables (arrays) as “groupshared” and they will be stored in the Local Data Share. Note however, that over-allocation will spill the variables to general RAM, thus reducing performance. Local variables in shader code will be stored in registers.
Tactics
The GPU architecture suggests programming tactics that will optimize performance.
- Do your program logic on the CPU, send the data to the GPU for operations that apply to (about) all of the data and contain a minimal number of alternative processing paths.
- Load as much data as possible into the GPU general RAM, so as to prevent the GPU waiting for data from CPU memory.
- Declare registers to store isolated local variables
- Cache data that you reuse in “groupshared” Memory. Don’t cache data you don’t reuse. Keep in mind that you can share cached data among the threads of a single group only.
- Use as much threads as possible. This requires you use only small amounts of cache memory per thread.
- Utilize the GPU as efficiently as possible by offering much more threads to it than it can process in a small amount of time.
- Plan the use of threads and memory ahead, then experiment to optimize.
Loading data from CPU memory into GPU memory passes the PCIe bridge which has a bandwidth, typically of order 1GBit/s; that is, it is a bottleneck.
So, you really like to load as much data onto GPU memory before executing your code.
The trick in planning your parallelism is to chop up (schedule, that is J ) the work in SIMD size chunks. You can declare groups of threads; the size of the groups and the number of groups. A group is typically executed by a single SIMD. To optimize performance, use Group Shared Memory, and set up the memory consumption of your thread group so it will fit into the available Group Shared Memory. That is: restrict the number of threads per group, and make sure you have a sufficient number of groups. Thread groups are three dimensional. My hypothesis at this time is that it is best to fit the dimensionality of the thread groups to match the structure of the end result. More about this below. Synchronization of the threads within a thread group flushes the GroupShared Memory of the SIMD.
A register typically has a lifetime that is bound to a thread. Individual threads are member of several groups – depending on how you program stuff. So, intermediate results aggregated by thread groups can be stored in registers.
Does My ATI Radeon HD5750 GPU Look Like This Architecture… A Bit?
The picture below (from here) is of the HD5770, which has 10 SIMD cores, one more than the HD5750.
What do we see here?
- SIMD engines. We see 10 cores for the HD5770, but there are 9 in the HD5750. Each core consists of 16 red blocks (streaming cores) and 4 yellow blocks (texture units).
- Registers (light red lines between the red blocks).
- L1 Textures caches, 18Kbyte per SIMD.
- Local Data Share, 32 Kbyte per SIMD.
- L2 caches, 8 Kbyte each.
Not visible is the 1Gb general RAM.
The processing unit runs at 700Mhz, memory runs at 1,150Mhz. Over clocking is possible however. The computational power is 1,008 TeraFLOP. Memory bandwidth is 73.6 GBit/s.
So, my GPU is quite a lot less powerful than the reference model. At first, a bit disappointing but on the other hand: much software I write for this GPU cannot run on the PCs of most people I know – their PCs are too old.
Various Approaches to Vector-Matrix Multiplication
Below we will see a number of approaches to vector-matrix multiplication discussed. The will include measurements of time and capacity. So, how do we execute the code and what do we measure?
Times measured include a number of iterations that each multiply the vector by the matrix. Usually this is 100 iterations, but fast alternatives get 1000 iterations. The faster the alternative, the more we are interested in variance and overhead.
Measurements:
- Do not include data upload and download times.
- Concern an equal data load, 12,288 input elements if the alternative can handle it.
- Correctness check; computation is also performed by CPU code, reference code.
- Run a release build from Visual Studio, without debugging.
- Allow AMP programs get a warming up run.
Vector-Matrix Product by CPU: Reference Measurement
In order to determine the performance gain, we measure the time it takes the CPU to perform the product. The algorithm, hence the code is straightforward:
In this particular case rows = cols = 12,288. The average over 100 runs is 2,452 ms, or 2.45 seconds. This amounts to a time performance of 0.12 gFLOPS (giga FLOPS: FLoating point Operations Per Second). We restrict floating point operations to addition and multiplication (yes, that includes subtraction and division). We calculate gFLOPS as:
2 / ms x Rows / 1000 x Cols / 1000, where ms is the average time in milliseconds.
The result of the test is correct.
Parallel Patterns Library
Although this blog post is about GPU performance, I took a quick look at PPL performance. We then see a performance gain of a factor 2, but the result is incorrect, that is, the above code leads to indeterminacy in a parallel_for loop. I left it at that, for now.
Matrix-Matrix Product
We can of course, view a vector as a matrix with a single column. The C++ AMP documentation has a running code example of a matrix multiplication. There is also an accompanying compute shader analog.
AMP
To the standard AMP example I’ve added some optimizing changes, and measured the performance. The AMP code look like this:
Here: amp is an alias for the Concurrency namespace. The tile size TS has been set to 32, which is the maximum; the product of the dimensional extents of a compute domain should not exceed 1024. The extent of the compute domain has been changed to depend on B, the matrix, instead of the output vector. The loop that sums element products has been unrolled in order to further improve performance.
As mentioned above, we start with a warming up. As is clear from the code we do not measure data transport to and from the GPU. Time measurements are over 100 iterations. The average run time obtained is 9,266.6 ms, hence 0.01 gFLOPS. The result after the test run was correct.
The data load is limited to 7*1024 = 7,168; that is 8*1024 is unstable.
Compute Shader
The above code was adapted to also run as a compute shader. The code looks like this:
The variables Group_SIZE_X and Group_SIZE_Y are passed into the shader at compile time, and are set to 32 each.
Time measurements are over 100 iterations. The average run time obtained is 11,468.3 ms, hence 0.01 gFLOPS. The result after the test run was correct. The data load is limited to 7*1024 = 7,168; that is 8*1024 is unstable.
Analysis
The performance of the compute shader is slightly worse that the AMP variant. Analysis with the Visual Studio 11 Concurrency Visualizer shows that work by the GPU in case of the compute shader program is executes in small spurts, separated by small periods of idleness, whereas in the AMP program the work is executed by the GPU in one contiguous period of time.
Nevertheless, performance is bad, worse than the CPU alternative. Why? Take a look at the picture below:
For any value of t_idx.global[0] – which is based on the extent of the matrix- that is unequal to zero, vector A does not have a value. So, in fact, if N is the number of elements in the vector, we do O( N3)retrievals but only O(N2) computations. So, we need an algorithm that is based on the extent of a vector, say the output vector.
Vector-Matrix Product
Somehow, it proved easier to develop the vector-matrix product as a compute shader. This is in spite of the fact that unlike AMP, it is not possible (yet?) to trace a running compute shader in Visual Studio. The idea of the algorithm is that we tile the vector in one dimension, and the matrix in two, thus obtaining the effect that the vector tile can be reused in multiplications with the matrix tile.
Compute Shader
A new compute shader was developed. This compute shader caches vector and matrix data in Group Shared memory. The HLSL code looks like this:
This program can handle much larger amounts of data. Indeed, this program runs problem free for a vector of 12,288 elements and a total data size of 576 Mbyte. Using an input vector of 12,288 elements, with total data size of 576 Mbyte. The time performance is 10.3 ms per run, averaged over 1,000 runs, which amounts to 29.3 gFLOPS. The result of the final run was reported to be correct.
AMP
In analogy to the compute shader above I wrote (and borrowed 🙂 ) a C++ AMP program. The main method looks like this:
The matrix is a vector with size * size elements. He tile size was chosen to be 128, because that setting yields optimal performance. The program was run on an input vector of 12,288 elements again, with total data size of 576 Mbyte. The time performance is 10.1 ms per run, averaged over 1000 runs, which amounts to 30.0 gFLOPS. The result of the final run was reported to be correct.
Analysis
We see here that the performance has much improved. When compared to the reference case, we can now do it (in milliseconds) 2,452 : 10.1 = 243 : 1, hence 243 times faster.
Simpler
Then, I read an MSDN Magazine article on AMP tiling by Daniel Moth, and it reminded me that caching is useless if you do not reuse the data. Well, the above algorithm does not reuse the cached matrix data. So I adapted the Compute Shader program to retrieve matrix data from central GPU memory directly. The HLSL code looks like this:
Note the tileSize of 512(!). This program was run for a vector of 12,288 elements and a total data size of 576 Mbyte. The time performance is again 10.3 ms for a multiplication which amounts to 29,3 gFLOPS (averaged over 1000 runs). The result of the final run was reported to be correct. So, indeed, caching the matrix data does not add any performance improvement.
AMP
For completeness, the AMP version:
Time performance is optimal for a tile size of 128, in case the number of vector elements is 12,288. We obtain an average run time of 9.7 ms (averaged over 1,000 runs), and a corresponding 31.1 gFLOPS. The result of the final run was correct. This program is 2452 / 9.7 = 252.8 times as fast as the reference implementation.
Conclusions
Developing an algorithm for vector-matrix inner product has demonstrated comparable performance for Compute Shaders and AMP, but much better tooling support for AMP: we can step through AMP code while debugging, and the Concurrency Visualizer has an AMP line. This better tool support helped very well in analyzing performance of a first shot at the algorithm. The final algorithm proved over 250 times faster than a straight forward CPU program for the same functionality.
Detailed knowledge of the GPU architecture, or the hardware model, proved of limited value. When trying to run the program with either the maximum nr of threads per group, or the maximum amount of data per Group Shared Memory, I ran into parameter value limits, instabilities, performance loss, and incorrect results. I guess, you will have to leave the detailed optimization to the GPU driver and to the AMP compiler.
One question keeps bothering me though: Where is my TeraFLOP?
I mean, Direct Compute was introduced with the slogan “A teraFLOP for every one of us”, AMP is built on top of Direct Compute, and my GPU has a computational power of 1.08 TeraFLOP. Am I not ‘one of us’? | https://thebytekitchen.com/2012/05/ | CC-MAIN-2017-39 | en | refinedweb |
This article appears in the Third Party Products and Tools section. Articles in this section are for the members only and must not be used to promote or advertise products in any way, shape or form. Please report any spam or advertising.
This EF/MSSQL combination discussed here..
Insert
Refresh
DeleteAll
Let's have a look at how things can be done with the help of this framework.
Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs. So the testing scenario described here:
[TestFixture, TestsOn(typeof(PersonRepository))]
[Metadata("NDbUnit Quickstart URL",
"")]
[Description("Uses the NDbUnit library to provide test data to a local database.")]
public class PersonRepositoryFixture
{
#region Constants
private const string XmlSchema = @"..\..\TestData\School.xsd";
#endregion // Constants
#region Fields
private SchoolEntities _schoolContext;
private PersonRepository _personRepository;
private INDbUnitTest _database;
#endregion // Fields
#region Setup/TearDown
[FixtureSetUp]
public void FixtureSetUp()
{
var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
_database = new SqlDbUnitTest(connectionString);
_database.ReadXmlSchema(XmlSchema);
var entityConnectionStringBuilder = new EntityConnectionStringBuilder
{
Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
Provider = "System.Data.SqlClient",
ProviderConnectionString = connectionString
};
_schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
_personRepository = new PersonRepository(this._schoolContext);
}
[FixtureTearDown]
public void FixtureTearDown()
{
_database.PerformDbOperation(DbOperationFlag.DeleteAll);
_schoolContext.Dispose();
}
....
NDbUnit
SchoolEntities
PersonRepository:
_database
INdUnitTest
private void InsertTestData(params string[] dataFileNames)
{
_database.PerformDbOperation(DbOperationFlag.DeleteAll);
if (dataFileNames == null)
{
return;
}
try
{
foreach (string fileName in dataFileNames)
{
if (!File.Exists(fileName))
{
throw new FileNotFoundException(Path.GetFullPath(fileName));
}
_database.ReadXml(fileName);
_database.PerformDbOperation(DbOperationFlag.InsertIdentity);
}
}
catch
{
_database.PerformDbOperation(DbOperationFlag.DeleteAll);
throw;
}
} but not the least, we need to provide the required test data in XML form. A snippet for data from the People table might look like this, for example:
People
<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="">
<Person>
<PersonID>1</PersonID>
<LastName>Abercrombie</LastName>
<FirstName>Kim</FirstName>
<HireDate>1995-03-11T00:00:00</HireDate>
</Person>
<Person>
<PersonID>2</PersonID>
<LastName>Barzdukas</LastName>
<FirstName>Gytis</FirstName>
<EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
</Person>
<Person>
...:
private const string People = @"..\..\TestData\School.People.xml";
...
[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
{
InsertTestData(People);")]
[DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
{
InsertTestData(People);
List<string> names =
_personRepository.GetNameList(NameOrdering.Normal);
Assert.Count(34, names);
Assert.AreEqual("Alexandra Walker", names.First());
Assert.AreEqual("Yan Li", names.Last());
}
[Test, TestsOn("PersonRepository.AddPerson")]
public void AddPerson_CalledOnce_IncreasesCountByOne()
{
InsertTestData(People);
int count = _personRepository.Count;
_personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });
Assert.AreEqual(count + 1, _personRepository.Count);
}
[Test, TestsOn("PersonRepository.RemovePerson")]
public void RemovePerson_CalledOnce_DecreasesCountByOne()
{
InsertTestData(People);
int count = _personRepository.Count;
_personRepository.RemovePerson(new Person { PersonID = 33 });
Assert.AreEqual(count - 1, _personRepository.Count);
} mimic a scenario which represents a more complex or exceptional case. The following test, for example, deals with the case that there is some sort of invalid input from the caller:
[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
[Row(null, typeof(ArgumentNullException))]
[Row("", typeof(ArgumentException))]
[Row("NotExistingCourse", typeof(ArgumentException))]
public void GetCourseMembers_WithGivenVariousInvalidValues_Throws
(string courseTitle, Type expectedInnerExceptionType)
{
var exception = Assert.Throws<RepositoryException>(() =>
_personRepository.GetCourseMembers(courseTitle));
Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
}:
[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
{
InsertTestData(People, Course, Department, StudentGrade);
List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");
Assert.Count(4, persons);
Assert.ForAll(
persons,
@p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
"Person has none of the expected IDs.");
}.
InsertTestData() preparation. | https://www.codeproject.com/script/Articles/View.aspx?aid=529830 | CC-MAIN-2017-39 | en | refinedweb |
I'm trying to build a method that pulls an array of arrays, much like nested loops in the view.
I am trying to build a method on my User model that does this:
@past_parties = User.parties
<%= @past_parties.each do |past_party| %>
<%= past_party.guests("email").uniq.each do |guest| %>
<%= guest.name %> <%= guest.email %>
<% end %>
<% end %>
class User < ActiveRecord::Base
has_many :hosts, dependent: :destroy
has_many :parties, through: :hosts
def past_guests
self.parties.guests
end
end
class Host < ActiveRecord::Base
belongs_to :user
has_many :parties, dependent: :destroy
has_many :guests, through: :parties
end
class Party < ActiveRecord::Base
belongs_to :host
has_many :guests, dependent: :destroy
end
class Guest < ActiveRecord::Base
belongs_to :party
end
undefined method `guests' for #<ActiveRecord::Associations::CollectionProxy []>
The problem is that you're trying to access an
array from another
array:
self.parties.guests
self.parties returns an
#<ActiveRecord::Associations::CollectionProxy []>, so if you want to get the
guests of the
parties you have to loop over the elements.
But since you want only the guests, you can simply change your user class to:
class User < ActiveRecord::Base has_many :hosts, dependent: :destroy has_many :parties, through: :hosts has_many :guests, through: :parties # Call your guests def past_guests self.guests end end | https://codedump.io/share/OAn77xEPwo4W/1/rails-4-multiple-ruby-arrays-with-nested-attributes | CC-MAIN-2017-39 | en | refinedweb |
Different types of Caching Part 1
Why caching needed?
- If many user are trying to accessing the site, it means your server has so many requests by the user, If every request hit the server for the response then it will lead to performance issues.
- For example,if a page may contain some static information in your web site, in this scenario we can cache those content and don’t force your server to get them from data base in every request.This will increase your perfomance.
Advantage of Caching
- Reduce Database and hosting server round-trips
- Reduce network traffic
- Improve performance
Remember while using Cache :
- While caching of dynamic contents that change frequently, set minimum cache–expiration time.
- Avoid caching for contents that are not accessing frequently.
Output Cache Filter :
It’s used to cache the data that is output of an action method.In default, this will cache the data upto 60 seconds.After 60 seconds, Asp.Net MVC will execute the action method again and start caching the output again.
Let we see with example..
Controller :
public class OutputCachingController : Controller
{
[OutputCache(Duration = 20, VaryByParam = “none”)]
public ActionResult Index()
{
ViewBag.CurrentDateTime = DateTime.Now.ToString();
return View();
}
}
View :
@{ ViewBag.Title = “Index”;}
<h2>Index</h2>
<h1>@ViewBag.CurrentDateTime</h1>
Run the application, first time it will show the output like below
up to 20 seconds it will show the same time in the browser without changing after we refresh the page because it will not call the action method upto 20 seconds ( so no new time updated in the viewbag)
VaryByParam : This property enables you to create different cached versions of the content when a form parameter or query string parameter varies.
VaryByParam = “aadharsh”
If it find records matching “aadharsh” string then a cache will be created,if paremeter/querystring changes than it will replace the old one with new
content
Let we discuss about Donut caching and Donut Hole caching in Part 2 | https://dotnethelpers.wordpress.com/2013/12/ | CC-MAIN-2017-39 | en | refinedweb |
From of that contract, so I can generate something human readable from it.And perhaps a validator. And some client stubs. Maybe some some test cases.Diagnostic tools. Etc.
What is the the alternative to describing your services?How is anyone going to write code to use these services, if they don't know where available to us.
My thoughts here are really just an extension to my thoughts on data serialization.Services are just the next level of thing that need to be meta-described.
Several folks have pointed out WADL (Web Application Description Language)as a potential answer, but it has at least one hole:it doesn't have a way of describing non-XML data usedas input or output. For example, JSON. It certainly is simpler and more direct than WSDL, so it does have that going for it.
All in all, good thoughts all around, but we have more work to do,more proof to provide. And by more work, I don't mean getting a handful of Redmonkers have been starting to web publish video interviews along with their usual audio interviews.Coté seems to be doing most (all?) of the work, and you can catch these as he releases them on his blog.
I like to see people experimenting with new technology, and 'upgrading' from audio to video soundslike a fun experiment (pardon the pun). But it doesn't work for me.
My issues:
There really isn't that much 'extra' in a video interview, over just the audio.You get to see faces. You get to see some body language. Maybe a picture or two.
The idea of watching an interview means I have to have two senses trained mice. Where's the audio? It ain't there.
Nathan Harrington has a number of articles up at developerWorks, such as"Monitor your Linux computer with machine-generated music"which discuss ways developers can use audio in their computing environment.
This is good stuff, and we need more of it.
I would be remiss in not pointing out here that audio feedback like this is only in the past. I've done it as well, a decade ago, when I was using a programming environment that I was able to easily reprogram: Smalltalk.
But audio usage in development environments is not yet mainstream.There's lots of research to be done here:
What are the best sound palettes to use: audio clips, midi tones,short midi sequences, percussion vs tones?
How should we take advantage of other audio aspects like ETag support. :-)
A number of people seemed to read into my post that ETags are a cause of Twitter's performance problems.I'd be the first to admit that such a proposition is a bit of a stretch. ETags are no panacea, and in fact you'll obviously have to write more code to handle them correctly. Harder even, if you're it never asks the server for them again, the app will come up all the quicker.
Good stuff to know, and take advantage of if you can.
For more specific information about our essential protocol, HTTP 1.1, see RFC 2616.It's available in multiple formats, here:.
In"Lesson learned",my colleague Robert Berry recounts 'losing' a blog post he wasediting. Not the first time I've heard this recently. I thought I'd document my process of creating blog posts, in caseit's of any use to anyone. Because I don't lose blog posts.
My secret: I use files.
Although many blogging systems let you edit your blog posts 'online',and even let you save them as drafts, I don't actually go into myblogging system to enter a blog post, until it's complete. Theprocess is:
Create a new blog entry by going into the Documents/blog-posts folderin my home directory of my primary computer, and creatinga new file, the name of which will be the title of the blog post. The 'extension' ofthe file is .html.
Edit the blog post in html, in a plain old text editor.
While editing, at some point, double click on the file in my file system browser (Explorer, PathFinder, Nautilus, etc)to preview it in a web browser.
Churn on the edit / proof-read-in-a-web-browser cycle, for hours or days.
Ready to post? First, check all links.
Surf over to blog editing site, enter the body of the post into the text editorvia the clipboard, set the title, categories / links, etc.
Preview the post on the blog editing site. Press the "Publish" button.
Move the file with the blog post from Documents/blog-posts toDocuments/blog-posts/posted .
HTML TextAreas are an extremely poor replacementfor a decent text editor. Using HTML is handy, since some (most?) blogging systems will accept it as input, and you can preview it yourself with your favorite web browser. Saving the files,even after finished posting, is a convenient backup mechanism, should you ever lose yourentire blog.
Besides these obvious advantages, I noticed some behaviours of other blogging systems thatI really didn't like, when saving drafts of posts 'online':
On one system I used, the title saved with the first draft was used as the slug of theblog URL. Even if I later changed the title, the slug remained some abbreviated version ofthe first saved title. Ick.
On one system I used, tags I saved with a post ended up showing up in the global list of tags on the blog. Even if there weren't any published posts that had used that tag. Ick.
I should note that I also have a directory Documents/blog-posts/unused for postswhich I've started, and decided not to post. The "island of misfit blog posts", as it were,but "unused" was shorter.
There you have it! Since you religiously backup files on your primary computeryou'll have no concern about ever losing a blog post again!
Some.
The performance of Twitter as of late has been abysmal. I'm getting tired of seeing tweets like"Wondering what happened to my last 5 tweets"and"2/3 of the updates from Twitterrific never post for me. Is this normal?".I'm especially tired of seeing that darned cat!
Pssst! I don't think the cat is actually helping! Maybe you should get himaway from your servers.
Here's a fun question to ask: do you support ETags?
In order to test whether Twitter is doing any of the typical sorts of caching that it could,via ETag or Last-Modified processing,I wrote a small program to issue HTTP requests with the relevant headers, which will indicatewhether the server is taking advantage of this information. The program, http-validator-test.py, is below.
First, here are the results of targetting :
$ http-validator-test.py no extra headers200 OK; Content-Length: 15175; Last-Modified: Fri, 18 May 2007 01:41:57 GMT; ETag: "60193-3b47-b04e2340"Passing header: If-None-Match: "60193-3b47-b04e2340"304 Not Modified; Content-Length: None; Last-Modified: None; ETag: "60193-3b47-b04e2340"Passing header: If-Modified-Since: Fri, 18 May 2007 01:41:57 GMT304 Not Modified; Content-Length: None; Last-Modified: None; ETag: "60193-3b47-b04e2340"
The first two lines indicate no special headers were passed in the response, and that a 200 OK response was returned with the specified Last-Modified and ETag headers.
The next two lines show an If-None-Match header was sent with the request,indicating to only send the content if it's ETag doesn't match the value passed. It does match, so a 304 Not Modifiedis returned instead, indicating no content will be sent down (it hasn't changed since you last asked for it).
The last two lines show an If-Modified-Since header was sent with the request,indicating to only send the content if it's last modified date is later than the value specified. It's not later, so a 304 Not Modified OK; Content-Length: 26491; Last-Modified: None; ETag: "a246e2e41e13726b7b8f911995841181"Passing header: If-None-Match: "a246e2e41e13726b7b8f911995841181"200 OK; Content-Length: 26504; Last-Modified: None; ETag: "1ef9e784fa85059db37831c505baea87"Passing header: If-Modified-Since: None200 OK; Content-Length: 26503; Last-Modified: None; ETag: "2ba91b02f418ed74e316c94c438e3788"
Rut-roh. Full content sent down with every request. Probably worse, generated with every request. In Ruby. Also note that no Last-Modified header is returned at all, and different ETagis tweet arrived' listed for every tweet. That's icky.
But poke around some more, peruse the gorgeous markup. Make sure you scroll right, to take in some of the long, duplicated, inline scripts. Breathtaking!
There's a lot of cleanup that could happen here. But let me get right to the point. There's fix itself. To be posted later. If you want part of the surprise ruined, Josh twittered after reading my mind.
Here's the program I used to test the HTTP cache validator headers: http-validator-test.py
#!/usr/bin/env python #-------------------------------------------------------------------- # do some ETag and Last-Modified tests on a url #-------------------------------------------------------------------- import sys import httplib import urlparse #-------------------------------------------------------------------- def sendRequest(host, path, header=None, value=None): headers = {} if (header): print "Passing header: %s: %s" % (header, value) headers[header] = value else: print "Passing no extra headers" conn = httplib.HTTPConnection(host) conn.request("GET", path, None, headers) resp = conn.getresponse() stat = resp.status etag = resp.getheader("ETag") lmod = resp.getheader("Last-Modified") clen = resp.getheader("Content-Length") print "%s %s; Content-Length: %s; Last-Modified: %s; ETag: %s" % ( resp.status, resp.reason, clen, lmod, etag ) print return resp #-------------------------------------------------------------------- if (len(sys.argv) <= 1): print "url expected as parameter" sys.exit() x, host, path, x, x, x = urlparse.urlparse(sys.argv[1], "http") resp = sendRequest(host, path) etag = resp.getheader("ETag") date = resp.getheader("Last-Modified") resp = sendRequest(host, path, "If-None-Match", etag) resp = sendRequest(host, path, "If-Modified-Since", date)
Update - 2007/05/17
Duncan Cragg pointed out that I had beentesting the Date header, instead of the Last-Modified header. Whoops, that was dumb.Thanks Duncan. Luckily, it didn't change the results of the tests (the status codesanyway). The program above, and the output of the program have been updated.
Duncan, btw, has a great series of articles on REST on his blog, titled"The REST Dialog".
In addition, I didn't reference the HTTP 1.1 spec, RFC 2616, for folks wanting to learnmore about the mysteries of our essential protocol. It's available in multiple formats,here:.rospect on. Basically just like Java introspection and reflection calls,to examine the shape of classes, and the state of objects, dynamically.Only with richer semantics. And frankly, just easier, if I remember correctly.
Anyhoo, for the web services were we writing, we constrained the data beingpassedand turn it into an instance of a modelled class fairly easily. Generically.For all our modelled classes. With one piece of code.
Automagic serialization.
One simplification that helped was that we greatly constrained the types of 'features' (what EMF non-users would call attributes or properties) of a class; it turned on documenting it, right? And what, you were going to do it by hand?
Generating machine-readable documentation of your web service data; ie, XML schema.I know you weren't going to write that by hand. Tell me you weren't going to write that by hand. Actually, admit it, you probably weren't data marshalling. Again, JavaScript is an obvious target language here.
Generating database schema and access code, if you want to go hog wild.
If it's not obvious, I'm sold on this modelling stuff. At least lightweight versions thereof.
So I happened to be having a discussion with a colleague the other day about using software modelling toa new class, I'd go into a class browser, and fill in a template like:
Number subclass: #Fraction instanceVariableNames: 'numerator denominator' classVariableNames: '' poolDictionaries: ''
This is a class definition. However, literally, it's a message send.A message sent to a class (Number) to create a subclass (Fraction)with two instance variables (numerator and denominator).
I don't recall anyone who ever bothered to learn Smalltalkhaving made claims that it wasn't modular. So I don't think havinglanguage level modularity features is a neccessity formaking the language usage modular.
My reference to Smalltalk isn't entirely spurious given the recent news of Dan Ingall's Project Flair.As Tom Waits would 'sing' ... "What's he building in there?".
This leads me to a number of questions:
Could we build a set of conventions around package/namespace/class/method it via the wii instead. Next time.
Both Ward and Kent are discussing some of the intrinsic qualities and mind-states of programmers.Interesting stuff. The kind of stuff we all know, but never really think about too much.
I ran into another one of these intrinsic qualities the other night when I attended the. | https://www.ibm.com/developerworks/mydeveloperworks/blogs/pmuellr/date/200705?maxresults=50&sortby=0&lang=en | CC-MAIN-2017-39 | en | refinedweb |
django-fab-deploy 0.7.4
Django deployment tool
django-fab-deploy is a collection of Fabric scripts for deploying and managing django projects on Debian/Ubuntu servers. License is MIT.
Please read the docs for more info.
CHANGES
0.7.4 (2012-03-01)
- django-fab-deploy now is compatible with fabric 1.4 (and require fabric 1.4);
- nginx and wsgi scripts are now compatible with upcoming django 1.4; example of django 1.4 project configuration is added to guide;
- shortcut for passing env defaults in define_host decorator;
- Ubuntu 10.04 apache restarting fix;
- config_templates/hgrc is removed;
- tests are updated for fabtest >= 0.1;
- apache_is_running function.
In order to upgrade install fabric >= 1.4 and make sure your custom scripts work.
0.7.3 (2011-10-13)
- permanent redirect from to domain.com is added to the default nginx config. Previously they were both available and this leads to e.g. authorization issues (user logged in at was not logged in at domain.com with default django settings regarding cookie domain).
0.7.2 (2011-06-14)
- Ubuntu 10.04 (lucid) initial support (this needs more testing);
- backports for Ubuntu 10.04 and 10.10;
- docs are now using default theme;
- remote django management command errors are no longer silinced;
- invoking create_linux_account with non-default username is fixed;
- define_host decorator for easier host definition;
- default DB_USER value (‘root’) is deprecated;
- default nginx config uses INSTANCE_NAME for logs.
In order to upgrade please set DB_USER to ‘root’ explicitly in env.conf if it was omitted.
0.7.1 (2011-04-21)
- DB_ROOT_PASSWORD handling is fixed
0.7 (2011-04-21)
- requirement for root ssh access is removed: django-fab-deploy is now using sudo internally (thanks Vladimir Mihailenco);
- better support for non-root mysql users, mysql_create_user and mysql_grant_permissions commands were added (thanks Vladimir Mihailenco);
- hgrc is no more required;
- ‘synccompress’ management command is no longer called during fab up;
- coverage command is disabled;
- nginx_setup and nginx_install are now available in command line by default;
- mysqldump no longer requires project dir to be created;
- home dir for root user is corrected;
- utils.detect_os is now failing loudly if detection fails;
- numerous test running improvements.
In order to upgrade from previous verions of django-fab-deploy, install sudo on server if it was not installed:
fab install_sudo
0.6.1 (2011-03-16)
- verify_exists argument of utils.upload_config_template function was renamed to skip_unexistent;
- utils.upload_config_template now passes all extra kwargs directly to fabric’s upload_template (thanks Vladimir Mihailenco);
- virtualenv.pip_setup_conf command for uploading pip.conf (thanks Vladimir Mihailenco);
- deploy.push no longer calls ‘synccompress’ management command;
- deploy.push accepts ‘before_restart’ keyword argument - that’s a callable that will be executed just before code reload;
- fixed regression in deploy.push command: ‘notest’ argument was incorrectly renamed to ‘test’;
- customization docs are added.
0.6 (2011-03-11)
- custom project layouts support (thanks Vladimir Mihailenco): standard project layout is no longer required; if the project has pip requirements file(s) and a folder with web server config templates it should be possible to use django-fab-deploy for deployment;
- git uploads support (thanks Vladimir Mihailenco);
- lxml installation is fixed;
- sqlite deployments are supported (for testing purposes).'
0.5.1 (2011-02-25)
- Python 2.5 support for local machine (it was always supported on servers). Thanks Den Ivanov.
0.5 (2011-02-23)
- OS is now auto-detected;
- Ubuntu 10.10 maverick initial support (needs better testing?);
- fabtest package is extracted from the test suite;
- improved tests;
- fab_deploy.system.ssh_add_key can now add ssh key even if it is the first key for user;
- ‘print’ calls are replaced with ‘puts’ calls in fabfile commands;
- django management commands are not executed if they are not available..
0.4.2 (2011-02-16)
- tests are included in source distribution
0.4.1 (2011-02-14)
- don’t trigger mysql 5.1 installation on Lenny
0.4 (2011-02-13)
- env.conf.VCS: mercurial is no longer required;
- undeploy command now removes virtualenv.
0.3 (2011-02-12)
- Debian Squeeze support;
- the usage of env.user is discouraged;
- fab_deploy.utils.print_env command;
- fab_deploy.deploy.undeploy command;
- better run_as implementation.
In order to upgrade from 0.2 please remove any usages of env.user from the code, e.g. before upgrade:
def my_site(): env.hosts = ['example.com'] env.user = 'foo' #...
After upgrade:
def my_site(): env.hosts = ['foo@example.com'] #...
0.2 (2011-02-09)
- Apache ports are now managed automatically;
- default threads count is on par with mod_wsgi’s default value;
- env.conf is converted to _AttributeDict by fab_deploy.utils.update_env.
This release is backwards-incompatible with 0.1.x because of apache port handling changes. In order to upgrade,
- remove the first line (‘Listen …’) from project’s config_templates/apache.config;
- remove APACHE_PORT settings from project’s fabfile.py;
- run fab setup_web_server from the command line.
0.1.2 (2011-02-07)
- manual config copying is no longer needed: there is django-fab-deploy script for that
0.1.1 (2011-02-06)
- cleaner internals;
- less constrains on project structure, easier installation;
- default web server config improvements;
- linux user creation;
- non-interactive mysql installation (thanks Andrey Rahmatullin);
- new documentation.
0.0.11 (2010-01-27)
- fab_deploy.crontab module;
- cleaner virtualenv management;
- inside_project decorator.
this is the last release in 0.0.x branch.
0.0.8 (2010-12-27)
Bugs with multiple host support, backports URL and stray ‘pyc’ files are fixed.
0.0.6 (2010-08-29)
A few bugfixes and docs improvements.
0.0.2 (2010-08-04)
Initial release.
- Author: Mikhail Korobov
- Documentation: django-fab-deploy package documentation
- Download URL:
- License: MIT license
- Requires Fabric (>=1.4.0), jinja2
- Package Index Owner: kmike
- DOAP record: django-fab-deploy-0.7.4.xml | https://pypi.python.org/pypi/django-fab-deploy | CC-MAIN-2017-39 | en | refinedweb |
:
Follow the instructions to download this book's companion files or practice files.
Download the sample content
Introduction xix
PART I: INTRODUCING MICROSOFT VISUAL C# AND MICROSOFT VISUAL STUDIO 2015
Chapter 1: Welcome to C# 3
Beginning programming with the Visual Studio 2015 environment 3
Writing your first program 8
Using namespaces 14
Creating a graphical application 17
Examining the Universal Windows Platform app 26
Adding code to the graphical application 29
Summary 32
Quick Reference 32
Chapter 2: Working with variables, operators, and expressions 33
Understanding statements 33
Using identifiers 34
Identifying keywords 34
Using variables 36
Naming variables 36
Declaring variables 37
Working with primitive data types 37
Unassigned local variables 38
Displaying primitive data type values 38
Using arithmetic operators 45
Operators and types 45
Examining arithmetic operators 47
Controlling precedence 52
Using associativity to evaluate expressions 53
Associativity and the assignment operator 53
Incrementing and decrementing variables 54
Prefix and postfix 55
Declaring implicitly typed local variables 56
Summary 57
Quick Reference 58
Chapter 3: Writing methods and applying scope 59
Creating methods 59
Declaring a method 60
Returning data from a method 61
Using expression-bodied methods 62
Calling methods 63
Applying scope 66
Defining local scope 66
Defining class scope 67
Overloading methods 68
Writing methods 68
Using optional parameters and named arguments 77
Defining optional parameters 79
Passing named arguments 79
Resolving ambiguities with optional parameters and named arguments 80
Summary 85
Quick reference 86
Chapter 4: Using decision statements 87
Declaring Boolean variables 87
Using Boolean operators 88
Understanding equality and relational operators 88
Understanding conditional logical operators 89
Short circuiting 90
Summarizing operator precedence and associativity 90
Using if statements to make decisions 91
Understanding if statement syntax 91
Using blocks to group statements 93
Cascading if statements 94
Using switch statements 99
Understanding switch statement syntax 100
Following the switch statement rules 101
Summary 104
Quick reference 105
Chapter 5: Using compound assignment and iteration statements 107
Using compound assignment operators 107
Writing while statements 108
Writing for statements 114
Understanding for statement scope 115
Writing do statements 116
Summary 125
Quick reference 125
Chapter 6: Managing errors and exceptions 127
Coping with errors 127
Trying code and catching exceptions 128
Unhandled exceptions 129
Using multiple catch handlers 130
Catching multiple exceptions 131
Propagating exceptions 136
Using checked and unchecked integer arithmetic 138
Writing checked statements 139
Writing checked expressions 140
Throwing exceptions 143
Using a finally block 148
Summary 149
Quick reference 150
PART II: UNDERSTANDING THE C# OBJECT MODEL
Chapter 7: Creating and managing classes and objects 153
Understanding classification 153
The purpose of encapsulation 154
Defining and using a class 154
Controlling accessibility 156
Working with constructors 157
Overloading constructors 158
Understanding static methods and data 167
Creating a shared field 168
Creating a static field by using the const keyword 169
Understanding static classes 169
Static using statements 170
Anonymous classes 172
Summary 174
Quick reference 174
Chapter 8: Understanding values and references 177
Copying value type variables and classes 177
Understanding null values and nullable types 183
Using nullable types 185
Understanding the properties of nullable types 186
Using ref and out parameters 187
Creating ref parameters 188
Creating out parameters 188
How computer memory is organized 190
Using the stack and the heap 192
The System.Object class 193
Boxing 194
Unboxing 194
Casting data safely 196
The is operator 196
The as operator 197
Summary 199
Quick reference 199
Chapter 9: Creating value types with enumerations
and structures 201
Working with enumerations 201
Declaring an enumeration 202
Using an enumeration 202
Choosing enumeration literal values 203
Choosing an enumeration’s underlying type 204
Working with structures 206
Declaring a structure 208
Understanding differences between structures and classes 209
Declaring structure variables 210
Understanding structure initialization 211
Copying structure variables 215
Summary 219
Quick reference 219
Chapter 10: Using arrays 221
Declaring and creating an array 221
Declaring array variables 221
Creating an array instance 222
Populating and using an array 223
Creating an implicitly typed array 224
Accessing an individual array element 225
Iterating through an array 225
Passing arrays as parameters and return values for a method 227
Copying arrays 228
Using multidimensional arrays 230
Creating jagged arrays 231
Summary 241
Quick reference 242
Chapter 11: Understanding parameter arrays 243
Overloading—a recap 243
Using array arguments 244
Declaring a params array 245
Using params object[ ] 247
Using a params array 249
Comparing parameter arrays and optional parameters 252
Summary 254
Quick reference 254
Chapter 12: Working with inheritance 255
What is inheritance? 255
Using inheritance 256
The System.Object class revisited 258
Calling base-class constructors 258
Assigning classes 259
Declaring new methods 261
Declaring virtual methods 262
Declaring override methods 263
Understanding protected access 265
Understanding extension methods 271
Summary 275
Quick reference 276
Chapter 13: Creating interfaces and defining abstract classes 277
Understanding interfaces 277
Defining an interface 278
Implementing an interface 279
Referencing a class through its interface 280
Working with multiple interfaces 281
Explicitly implementing an interface 282
Interface restrictions 283
Defining and using interfaces 284
Abstract classes 293
Abstract methods 295
Sealed classes 295
Sealed methods 295
Implementing and using an abstract class 296
Summary 302
Quick reference 303
Chapter 14: Using garbage collection and resource management 305
The life and times of an object 305
Writing destructors 306
Why use the garbage collector? 308
How does the garbage collector work? 310
Recommendations 310
Resource management 311
Disposal methods 311
Exception-safe disposal 312
The using statement and the IDisposable interface 312
Calling the Dispose method from a destructor 314
Implementing exception-safe disposal 316
Summary 325
Quick reference 325
PART III: DEFINING EXTENSIBLE TYPES WITH C#
Chapter 15: Implementing properties to access fields 329
Implementing encapsulation by using methods 329
What are properties? 331
Using properties 333
Read-only properties 334
Write-only properties 334
Property accessibility 335
Understanding the property restrictions 336
Declaring interface properties 337
Replacing methods with properties 339
Generating automatic properties 343
Initializing objects by using properties 345
Summary 349
Quick reference 350
Chapter 16: Using indexers 353
What is an indexer? 353
An example that doesn’t use indexers 353
The same example using indexers 355
Understanding indexer accessors 357
Comparing indexers and arrays 358
Indexers in interfaces 360
Using indexers in a Windows application 361
Summary 367
Quick reference 368
Chapter 17: Introducing generics 369
The problem with the object type 369
The generics solution 373
Generics vs. generalized classes 375
Generics and constraints 375
Creating a generic class 376
The theory of binary trees 376
Building a binary tree class by using generics 379
Creating a generic method 389
Defining a generic method to build a binary tree 389
Variance and generic interfaces 391
Covariant interfaces 393
Contravariant interfaces 395
Summary 397
Quick reference 397
Chapter 18: Using collections 399
What are collection classes? 399
The List<T> collection class 401
The LinkedList<T> collection class 403
The Queue<T> collection class 404
The Stack<T> collection class 405
The Dictionary<TKey, TValue> collection class 407
The SortedList<TKey, TValue> collection class 408
The HashSet<T> collection class 409
Using collection initializers 411
The Find methods, predicates, and lambda expressions 411
The forms of lambda expressions 413
Comparing arrays and collections 415
Using collection classes to play cards 416
Summary 420
Quick reference 420
Chapter 19: Enumerating collections 423
Enumerating the elements in a collection 423
Manually implementing an enumerator 425
Implementing the IEnumerable interface 429
Implementing an enumerator by using an iterator 431
A simple iterator 432
Defining an enumerator for the Tree<TItem> class by using an iterator 434
Summary 436
Quick reference 437
Chapter 20: Decoupling application logic and handling events 439
Understanding delegates 440
Examples of delegates in the .NET Framework class library 441
The automated factory scenario 443
Implementing the factory control system without using delegates 443
Implementing the factory by using a delegate 444
Declaring and using delegates 447
Lambda expressions and delegates 455
Creating a method adapter 455
Enabling notifications by using events 456
Declaring an event 456
Subscribing to an event 457
Unsubscribing from an event 457
Raising an event 458
Understanding user interface events 458
Using events 460
Summary 466
Quick reference 466
Chapter 21: Querying in-memory data by using query expressions 469
What is LINQ? 469
Using LINQ in a C# application 470
Selecting data 472
Filtering data 474
Ordering, grouping, and aggregating data 475
Joining data 477
Using query operators 479
Querying data in Tree<TItem> objects 481
LINQ and deferred evaluation 487
Summary 491
Quick reference 491
Chapter 22: Operator overloading 493
Understanding operators 493
Operator constraints 494
Overloaded operators 494
Creating symmetric operators 496
Understanding compound assignment evaluation 498
Declaring increment and decrement operators 499
Comparing operators in structures and classes 500
Defining operator pairs 500
Implementing operators 501
Understanding conversion operators 508
Providing built-in conversions 508
Implementing user-defined conversion operators 509
Creating symmetric operators, revisited 510
Writing conversion operators 511
Summary 513
Quick reference 514
PART IV: BUILDING UNIVERSAL WINDOWS PLATFORM APPLICATIONS WITH C#
Chapter 23: Improving throughput by using tasks 517
Why perform multitasking by using parallel processing? 517
The rise of the multicore processor 518
Implementing multitasking by using the Microsoft .NET Framework 519
Tasks, threads, and the ThreadPool 520
Creating, running, and controlling tasks 521
Using the Task class to implement parallelism 524
Abstracting tasks by using the Parallel class 536
When not to use the Parallel class 541
Canceling tasks and handling exceptions 543
The mechanics of cooperative cancellation 543
Using continuations with canceled and faulted tasks 556
Summary 557
Quick reference 557
Chapter 24: Improving response time by performing
asynchronous operations 559
Implementing asynchronous methods 560
Defining asynchronous methods: The problem 560
Defining asynchronous methods: The solution 564
Defining asynchronous methods that return values 569
Asynchronous method gotchas 570
Asynchronous methods and the Windows Runtime APIs 572
Using PLINQ to parallelize declarative data access 575
Using PLINQ to improve performance while iterating through a collection 576
Canceling a PLINQ query 580
Synchronizing concurrent access to data 581
Locking data 584
Synchronization primitives for coordinating tasks 584
Canceling synchronization 587
The concurrent collection classes 587
Using a concurrent collection and a lock to implement thread-safe data access 588
Summary 598
Quick reference 599
Chapter 25: Implementing the user interface for a Universal Windows Platform app 601
Features of a Universal Windows Platform app 602
Using the Blank App template to build a Universal Windows Platform app 605
Implementing a scalable user interface 607
Applying styles to a UI 638
Summary 649
Quick reference 649
Chapter 26: Displaying and searching for data in a Universal Windows Platform app 651
Implementing the Model-View-ViewModel pattern 651
Displaying data by using data binding 652
Modifying data by using data binding 659
Using data binding with a ComboBox control 663
Creating a ViewModel 665
Adding commands to a ViewModel 669
Searching for data using Cortana 680
Providing a vocal response to voice commands 692
Summary 695
Quick reference 696
Chapter 27: Accessing a remote database from a Universal Windows Platform app 697
Retrieving data from a database 698
Creating an entity model 703
Creating and using a REST web service 712
Inserting, updating, and deleting data through a REST web service 728
Reporting errors and updating the UI 738
Summary 746
Quick reference 747
Index 749
We've made every effort to ensure the accuracy of this book and its companion content. Any errors that have been confirmed since this book was published can be downloaded below. | http://www.informit.com/store/microsoft-visual-c-sharp-step-by-step-9781509301041?w_ptgrevartcl=Microsoft+Visual+C%23+Step+by+Step_2351722 | CC-MAIN-2017-39 | en | refinedweb |
Power management is a tricky thing to understand - and even harder thing to implement properly. The words and states that is used to describe the hardware is not the same as the states used to describe the software. The software/hardware interaction is normally defined by the Advanced Configuration and Power Interface (ACPI). ACPI defines common interfaces for hardware recognition, motherboard and device configuration and power management, and is supported by the Linux kernel natively.
The ACPI specification promotes the concept that systems should manage energy consumption by transitioning unused devices into lower power states including placing the entire system in a low-power state (sleeping state) when possible. A system is broken down into classes:
While the Global System is working (on), the processor can be in any number of states, from executing instructions at full rate (G0/C0/P0), to executing instructions at a reduced rate (G0/C0/P4), to waiting for interrupts to occur in a low power mode (G0/C1).
Device states are independent of the system, and are states of particular devices. Device states apply to any device on any bus.
The Blackfin processor provides several operating modes, each with different performance/power/latency profiles. In addition to overall clock management and gating clocks to each of the peripherals, the processor provides the control functions to dynamically alter the processor core supply voltage to further reduce power dissipation. The power states available to the hardware are:
IDLEinstruction. The processor remains in the Idle state until a peripheral or external device, generates an interrupt that requires servicing. The kernel IDLE loop is the
IDLEstate, as it saves power, but has zero overhead in responding to an interrupt.
Not all hardware states are available in the Linux kernel.
A simple hardware workaround on the SCKE Strobe made the issue go away:
Add a 6.8k Ohm resistor between SCKE (J2-81) and GND (J2-87).
The kernel supports three power management states generically, though each is dependent on platform support code to implement the low-level details for each state. Blackfin Linux currently offers Standby.
“standby”
This state offers high power savings, while providing a very low-latency transition back to a working system. No operating state is lost (the CPU retains power), so the system easily starts up again where it left off. From a Blackfin hardware perspective - the processor is in Full On, but the Clocks are slowed down to consume almost no power.
We try to put devices in a low-power state equivalent to D1, which also offers low power savings, but low resume latency. Not all devices support D1, and those that don't are left on.
A transition from
Standby to the
On state should take only a few milliseconds.
Linux Kernel Configuration -> Power management options -> [*] Power Management support [ ] Legacy Power Management API (DEPRECATED) [ ] Power Management Debug Support [*] Suspend to RAM and standby Standby Power Saving Mode (Sleep Deeper) ---> [*] Allow Wakeup from Standby by GPIO (2) GPIO number GPIO Polarity (Active High) ---> --- Possible Suspend Mem / Hibernate Wake-Up Sources [ ] Allow Wake-Up from on-chip PHY or PH6 GP
There are two different options controlling the Wakeup
Wakeup Events:
For dynamic power management, any of the peripherals can be configured to wake up the core from its idled state to process the interrupt and resume form standby, simply by enabling the appropriate bit in the system interrupt wakeup-enable register (refer to Hardware Reference Manual SIC_IWR).
If a peripheral interrupt source is enabled in SIC_IWR and the core is idled, the interrupt causes the DPMC to initiate the core wakeup sequence in order to process the interrupt.
The linux kernel API provides these three functions to enable or disable wakeup capabilities of interrupts:
int set_irq_wake(irq, state);
int disable_irq_wake(unsigned int irq)
file: include/linux/interrupt.h
scm failed with exit code 1: file does not exist in git
int enable_irq_wake(unsigned int irq)
file: include/linux/interrupt.h
scm failed with exit code 1: file does not exist in git
Example:
Following patch enables irq wake for all gpio-keys push buttons.
Index: drivers/input/keyboard/gpio_keys.c =================================================================== --- drivers/input/keyboard/gpio_keys.c (revision 4154) +++ drivers/input/keyboard/gpio_keys.c (working copy) @@ -100,7 +100,7 @@ irq, error); goto fail; } - + enable_irq_wake(irq); input_set_capability(input, type, button->code); }
In current kernel versions this feature has been added to the gpio-keys driver
This feature can be enabled by:
root:/sys/devices/platform/gpio-keys.0/power> echo enabled > wakeup
This option adds some extra code that allows specifying any Blackfin GPIO to be configured as Wakeup Strobe.
There is an alternative Blackfin specific API for GPIO wakeups:
This API allows GPIO wakeups without using the Linux interrupt API. It also allows configuring a Wakeup as EDGE or Both EDGE sensitive while the Linux kernel interrupt is configured level sensitive.
#define PM_WAKE_RISING 0x1 #define PM_WAKE_FALLING 0x2 #define PM_WAKE_HIGH 0x4 #define PM_WAKE_LOW 0x8 #define PM_WAKE_BOTH_EDGES (PM_WAKE_RISING | PM_WAKE_FALLING) #define PM_WAKE_IGNORE 0xF0 int gpio_pm_wakeup_request(unsigned gpio, unsigned char type); void gpio_pm_wakeup_free(unsigned gpio);
The table below shows the HIBERNATE and DEEP SLEEP wake-up sources for the BF60x.
[*] Suspend to RAM and standby │ │ │ │ [ ] Run-time PM core functionality │ │ │ │ [ ] Power Management Debug Support │ │ │ │ *** Possible Suspend Mem / Hibernate Wake-Up Sources *** │ │ │ │ [ ] Allow Wake-Up from PA15 │ │ │ │ [ ] Allow Wake-Up from PB15 │ │ │ │ [ ] Allow Wake-Up from PC15 │ │ │ │ [ ] Allow Wake-Up from PD06(ETH0_PHYINT) │ │ │ │ [*] Allow Wake-Up from PE12(ETH1_PHYINT, PUSH BUTTON) │ │ │ │ (1) Wake-up priority │ │ │ │ [ ] Allow Wake-Up from PG04(CAN0_RX) │ │ │ │ [ ] Allow Wake-Up from PG13 │ │ │ │ [ ] Allow Wake-Up from (USB) │ │
The power management subsystem provides a unified sysfs interface to userspace, regardless of what architecture or platform one is running. The interface exists in /sys/power/ directory (assuming sysfs is mounted at /sys).
/sys/power/state controls system power state. Reading from this file returns what states are supported, which is hard-coded to 'standby' (Power-On Suspend), 'mem' (Suspend-to-RAM), and 'disk' (Suspend-to-Disk).
Blackfin Linux supports:
Writing to this file one of those strings causes the system to transition into that state. Please see the file Documentation/power/states.txt for a description of each of those states.
root:~> echo standby > standby wakeup from "standby" at Thu Jan 1 01:45:31 1970 Syncing filesystems ... done. Freezing user space processes ... (elapsed 0.00 seconds) done. Freezing remaining freezable tasks ... (elapsed 0.00 seconds) done. Suspending console(s) Restarting tasks ... done. root:/>
root:~> echo mem > mem wakeup from "mem" at Thu Jan 1 01:45:31 1970 Syncing filesystems ... done. Freezing user space processes ... (elapsed 0.00 seconds) done. Freezing remaining freezable tasks ... (elapsed 0.00 seconds) done. Suspending console(s) Restarting tasks ... done. root:/>
#include <stdio.h> #include <getopt.h> #include <fcntl.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <errno.h> static void suspend_system(const char *suspend) { char buf[20]; int f = open("/sys/power/state", O_WRONLY); int len; ssize_t n; if (f < 0) { perror("open /sys/power/state"); return; } len = sprintf(buf, "%s\n", suspend) - 1; len = strlen(buf); n = write(f, buf, len); /* this executes after wake from suspend */ if (n < 0) perror("write /sys/power/state"); else if (n != len) fprintf(stderr, "short write to %s\n", "/sys/power/state"); close(f); } int main(int argc, char **argv) { static char *suspend = "standby"; printf("Going into %s ...\n",suspend); suspend_system(suspend); printf("Awakeing from %s ...\n",suspend); return 0; }
The power wake up times are different among Linux system core driver, Linux generic peripheral driver and Linux application.
In 2012R1 Linux distribution for BF60X:
Many operating conditions can affect power dissipation/consumption. System designers should refer to Estimating Power for ADSP-BF531/BF532/BF533 Blackfin Processors (EE-229) on the Analog Devices website ()
EE229 This document provides detailed information
In general:
Derived Power Consumption (PDDTOT)
Internal Power Consumption (PDDINT)
External Power Consumption (PDDEXT, PDDRTC)
The Standby/sleep mode reduces dynamic power dissipation by disabling the clock to the processor core (CCLK).
Furthermore, Standby/sleep_deeper.
Complete Table of Contents/Topics | https://docs.blackfin.uclinux.org/doku.php?id=power_management_support | CC-MAIN-2017-39 | en | refinedweb |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Error on Method Calculating Quantity Available less Outgoing Quantity in Function Field.
I am trying to implement the following code in order to create a function field in my product model that will give me the qty_available result less the outgoing_qty result. I am currently getting a: 'NoneType' object has no attribute 'qty_available' error. I am assuming that is because I am trying to get the value of qty_available in the incorrect way. What adjustments should I make to my code?
from openerp.osv import fields, osv
class real_inventory_counter(osv.osv):
_inherit = "product.product"
def real_inventory_count(self, cr, uid, arg, ids, field_name, context=None):
result = {}
for product in self.browse(cr, uid, ids, context):
result[product.id] = product.qty_available - product.outgoing_qty
return result
_columns = {
'testing_time': fields.integer('Test Field', help='Just a field for testing'),
'real_inventory_count': fields.function(real_inventory_count, type='float', string='Real Inventory Count'),
}
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/error-on-method-calculating-quantity-available-less-outgoing-quantity-in-function-field-72911 | CC-MAIN-2017-39 | en | refinedweb |
It’s been few months that we pulled the wrap off FabrikamShipping SaaS, and the response (example here) has been just great: I am glad you guys are finding the sample useful!
In fact, FabrikamShipping SaaS contains really a lot of interesting stuff and I am guilty of not having found the time to highlight the various scenarios, lessons learned and reusable little code gems it contains. Right now, the exploratory material is limited to the intro video, the recording of my session at TechEd Europe and the StartHere pages of the source code & enterprise companion packages.
We designed the online demo instance and the downloadable packages to be as user-friendly as we could, and in fact we have tens of people creating tenants every day, but it’s undeniable that some more documentation may help to zero on the most interesting scenarios. Hence, I am going to start writing more about the demo. Some times we’ll dive very deep in code and architecture, some other we’ll stay at higher level.
I’ll begin by walking you through the process of subscribing to a small business edition instance of FabrikamShipping: the beauty of this demo angle is that all you need to experience it is a browser, an internet connection and one or more accounts at Live, Google or Facebook. Despite of the nimble requirements, however, this demo path demonstrates many important concepts for SaaS and cloud based solutions: in fact, I am told it is the demo that most often my field colleagues use in their presentations, events and engagements.
Last thing before diving in: I am going to organize this as instructions you can follow for going thru the demo, almost as a script, so that you can get the big picture reasonably fast; I will expand on the details on later posts.
Subscribing to a Small Business Edition instance of FabrikamShipping
Let’s say that today you are Adam Carter: you work for Contoso7, a fictional startup, and you are responsible for the logistic operations. Part of the Contoso7 business entails sending products to their customers, and you are tasked with finding a solution for handling Contoso7’s shipping needs. You have no resources (or desire) to maintain software in-house for a commodity function such as shipping, hence you are on the hunt for a SaaS solution that can give you what you need just by pointing your browser to the right place.
Contoso7 employees are mostly remote; furthermore, there is a seasonal component in Contoso7 business which requires a lot of workers in the summer and significantly less stuff in the winter. As a result, Contoso7 does not keep accounts for those workers in a directory, but asks them to use their email and accounts from web providers such as Google, Live Id, or even Facebook.
In your hunt for the right solution, you stumble on FabrikamShipping: it turns out they offer a great shipping solution, delivered as a monthly subscription service to a customized instance of their application. The small business edition is super-affordable, and it supports authentication from web providers. It’s a go!
You navigate to the application home page at, and sign up for one instance.
As mentioned, the Small Business Edition is the right fit for you; hence, you just click on the associated button.
Before everything else, FabrikamShipping establishes a secure session: in order to define your instance, you’ll have to input information you may not want to share too widely! FabrikamShipping also needs to establish a business relationship with you: if you will successfully complete the onboarding process, the identity you use here will be the one associated to all the subscription administration activities.
You can choose to sign in from any of the identity providers offered above. FabrikamShipping trusts ACS to broker all its authentication need: in fact, the list of supported IPs comes from directly from the FabrikamShipping namespace in ACS. Pick any IP you like!
In this case, I have picked a live id. Note for the demoers: the identity you use at this point is associated to your subscription, and is also the way in which FabrikamShipping determines which instance you should administer when you come back to the management console. You can only have one instance associated to one identity, hence once you create a subscription with this identity you won’t be able to re-use the same identity for creating a NEW subscription until the tenant gets deleted (typically every 3 days).
Once you authenticated, FabrikamShipping starts the secure session in which you’ll provide the details of your instance. The sequence of tabs you see on top of the page represent the sequence of steps you need to go through: the FabrikamShipping code contains a generic provisioning engine which can adapt to different provisioning processes to accommodate multiple editions, and it sports a generic UI engine which can adapt to it as well. The flow here is specific to the small business edition.
The first screen gathers basic information about your business: the name of the company, the email address at which you want to receive notifications, which Windows Azure data center you want your app to run on, and so on. Fill the form and hit Next.
In this screen you can define the list of the users that will have access to your soon-to-appear application instance for Contoso7.
Users of a Small Business instance authenticate via web identity providers: this means that at authentication time you won’t receive a whole lot of information in form of claims, some times you’ll just get an identifier. However, in order to operate the shipping application every user need some profile information (name, phone, etc) and the level of access it will be granted to the application features (i.e., roles).
As a result, you as the subscription administrator need to enter that information about your users; furthermore, you need to specify for every user a valid email address so that FabrikamShipping can generate invitation emails with activation links in them (more details below).
In this case, I am adding myself (ie Adam Carter) as an application user (the subscription administrator is not added automatically) and using the same hotmail account I used before. Make sure you use an email address you actually have access to, or you won’t receive notifications you need for moving forward in the demo. Once you filled in all fields, you can click Add as New for adding the entry in the users’ list.
For good measure I always add another user for the instance, typically with a gmail or Facebook account. I like the idea of showing that the same instance of a SaaS app can be accessed by users coming from different IPs, something that before the rise of the social would have been considered weird at best
Once you are satisfied with your list of users, you can click Next.
The last screen summarizes your main instance options: if you are satisfied, you can hit Subscribe and get FabrikamShipping to start the provisioning process which will create your instance.
Note: on a real-life solution this would be the moment to show the color of your money. FabrikamShipping is nicely integrated with the Adaptive Payment APIs and demonstrates both explicit payments and automated, preapproved charging from Windows Azure. I think it is real cool, and that it deserves a dedicated blog post: also, in order to work it requires you to have an account with the PayPal developer sandbox, hence this would add steps to the flow: more reasons to defer it to another post.
Alrighty, hit Subscribe!
FabrikamShipping thanks you for your business, and tells you that your instance will be ready within 48 hours. In reality that’s the SLA for the enterprise edition, which I’ll describe in another post, for the Small Business one we are WAAAY faster. If you click on the link for verifying the provisioning status, you’ll have proof.
Here you entered the Management Console: now you are officially a Fabrikam customer, and you get to manage your instance.
The workflow you see above is, once again, a customizable component of the sample: the Enterprise edition one would be muuuch longer. In fact, you can just hit F5 a few times and you’ll see that the entire thing will turn green in typically less than 30 seconds. That means that your Contoso7 instance of FabrikamShipping is ready!
Now: what happened in those few seconds between hitting Subscribe and the workflow turning green? Quite a lot of things. The provisioning engine creates a dedicated instance of the app database in SQL Azure, creates the database of the profiles and the various invitation tickets, add the proper entry in the Windows Azure store which tracks tenants and options, creates dedicated certificates and upload them in ACS, creates entries in ACS for the new relying party and issuer, sends email notifications to the subscriber and invites to the users, and many other small things which are needed for presenting Contoso7 with a personalized instance of FabrikamShipping. There are so many interesting things taking place there that for this too we’ll need a specific post. The bottom line here is: the PaaS capabilities offered by the WIndows Azure platform are what made it possible for us to put together something so sophisticated as a sample, instead of requiring the armies of developers you’d need for implementing features like the ones above from scratch. With the management APIs from Windows Azure, SQL Azure and ACS we can literally build the provisioning process as if we’d be playing with Lego blocks.
Activating One Account and Accessing the Instance
The instance is ready. Awesome! Now, how to start using it? The first thing Adam needs to do is check his email.
Above you can see that Adam received two mails from FabrikamShipping: let’s take a look to the first one.
The first mail informs Adam, in his capacity of subscription manager, that the instance he paid for is now ready to start producing return on investment. It provides the address of the instance, that in good SaaS tradition is of the form http://<applicationname>/<tenant>, and explains how the instance work: here there’s the instance address, your users all received activation invitations, this is just a sample hence the instance will be gone in few days, and similar. Great. If we want to start using the app, Adam needs to drop the subscription manager hat and pick up the one of application user. For this, we need to open the next message.
This message is for Adam the user. It contains a link to an activation page (in fact we are using MVC) which will take care of associating the record in the profile with the token Adam will use for the sign-up. As you can imagine, the activation link is unique for every user and becomes useless once it’s been used. Let’s click on the activation link.
Here we are already on the Contoso7 instance, as you can see from the logo (here I uploaded a random image (not really random, it’s the logo of my WP7 free English-Chinese dictionary app (in fact, it’s my Chinese seal
))). Once again, the list of identity providers is rendered from a list dynamically provided by the ACS: although ACS provides a ready-to-use page for picking IPs, the approach shown here allows Fabrikam to maintain a consistent look and feel and give continuity of experience, customize the message to make the user aware of the significance of this specific step (sign-up), and so on. Take a peek at the source code to see how that’s done.
Let’s say that Adam picks live id: as he is already authenticated with it from the former steps, the association happens automatically.
The page confirms that the current account has been associated to the profile; to prove it, we can now finally access the Contoso7 instance. We can go back to the mail and follow the provided link, or use directly the link in the page here.
This is the page every Contoso7 user will see when landing on their instance: it may look very similar to the sign-up page above, but notice the different message clarifying that this is a sign-in screen.
As Adam is already authenticated with Live ID, as soon as he hits the link he gets redirected to ACS, gets a token and uses it to authenticate with the instance. Behind the scenes, Windows Identity Foundation uses a custom ClaimsAuthenticationManager to shred the incoming token: it verifies that the user is accessing the right tenant (tenant isolation is king), then retrieves form SQL Azure the profile data and adds them as claims in the current context (there are solid reasons for which we store those at the RP side, once again: stuff for another post). As a result, Adam gets all his attributes and roles dehydrated in the current context and the app can take advantage of claims based identity for customizing the experience and restrict access as appropriate. In practical terms, that means that Adam’s sender data are pre-populated: and that Adam can do pretty much what he wants with the app, since he is in the Shipping Manager role that he self-awarded to his user at subscription time.
In less than 5 minutes, if he is a fast typist, Adam got for his company a shipping solution; all the users already received instructions on how to get started, and Adam himself can already send packages around. Life is good!
Works with Google, too! And all the Others*
*in the Lost sense
Let’s leave Adam for a moment and let’s walk few clicks in the Joe’s mouse. If you recall the subscription process, you’ll remember that Adam defined two users: himself and Joe. Joe is on gmail: let’s go take a look to what he got. If you are doing this from the same machine as before: remember to close all browsers or you risk to carry forward existing authentication sessions!
Joe is “just” a user, hence he received only the user activation email.
The mail is absolutely analogous to the activation mail received by Adam: the only differences are the activation link, specific to Joe’s profile, and how gmail renders HTML mails.
Let’s follow the activation link.
Joe gets the same sign-up UI we observed with Adam: but this time Joe has a gmail account, hence we’ll pick the Google option.
ACS connects with google via the OpenID protocol: the UI above is what google shows you when an application (in this case the ACS endpoint used by FabrikamShipping) requests an attribute exchange transaction, so that Joe can give or refuse his consent to the exchange. Of course Joe knows that the app is trusted, as he got a headsup from Adam, and he gives his consent. This will cause one token to flow to the ACS, which will transform it and make it available for the browser to authenticate with FabrikamShipping. From now on, we already know what will happen: the token will be matched with the profile connected to this activation page, a link will be established and the ticket will be voided. Joe just joined the Contoso7’s FabrikamShipping instance family!
And now, same drill as before: in order to access the instance, all Joe needs to do is click on the link above or use the link in the notification (better to bookmark it).
Joe picks google as his IP…
..and since he flagged “remember this approval” at sign-up time, he’ll just see the page above briefly flashing in the browser and will get authenticated without further clicks.
And here we are! Joe is logged in the Contoso7 instance of FabrikamShipping.
As you can see in the upper right corner, his role is Shipping Creator, as assigned by Adam at subscription time. That means that he can create new shipments, but he cannot modify existing ones. If you want to double check that, just go through the shipment creation wizard, verify that it works and then try to modify the newly created shipment: you’ll see that the first operation will succeed, and the second will fail. Close the browser, reopen the Contoso7 instance, sign in again as Adam and verify that you are instead able to do both creation and modifications. Of course the main SaaS explanatory value of this demo is in the provisioning rather than the application itself, but it’s nice to know that the instances itself actually use the claims as well.
Aaand that’s it for creating and consuming Small Business edition instances. Seems long? Well, it takes long to write it down: but with a good form filler, I can do the entire demo walkthrough above well under 3 minutes. Also: this is just one of the possible path, but you can add your own spins & variations (for example, I am sure that a lot of people will want to try using facebook). The source code is fully available, hence if you want to add new identity providers (yahoo, ADFS instances or arbitrary OpenID providers are all super-easy to add) you can definitely have fun with it.
Now that you saw the flow from the customer perspective, in one of the next installments we’ll take a look at some of the inner workings of our implementation: but now… it’s Saturday night, and I better leave the PC alone before they come to grab my hair and drag me away from it
| https://blogs.msdn.microsoft.com/vbertocci/2011/02/12/fun-with-fabrikamshipping-saas-i-creating-a-small-business-edition-instance/ | CC-MAIN-2018-34 | en | refinedweb |
Introduction: ESP8266: Parsing JSON
As promised in my previous instructable, I will be covering more about the ArduinoJson library in detail, in this instructable. JSON (JavaScript Object Notation) is a lightweight data-interchange format that is easy for humans to read and write, and easy for machines to parse and generate. JSON objects are written in key/value pairs and it's a must for keys to be of the string data type while values can be a string, number, object, array, boolean or null. A vast majority of APIs that are now being used will return JSON data when called, and knowing how to parse them will definitely benefit you.
In this instructable, we will be using the ArduinoJson library for the ESP8266 to help us parse JSON data and extract values based on keys. The ArduinoJson library is also capable of serializing JSON, meaning you could generate your own JSON data using data from sensors connected to your ESP8266 or Arduino for example (will be covering more about JSON serialization, in detail, in another instructable). So, let's get started.
This project was done by me, Nikhil Raghavendra, a Diploma in Computer
Engineering student from Singapore Polytechnic, School of Electrical and Electronic Engineering, under the guidance of my mentor Mr Teo Shin Jen.
Step 1: Install the ArduinoJson Library coloured .
Step 2: Performing a GET Request
Before we can start parsing, we need to have the JSON data in the first place and to obtain our data, we perform a GET request. A GET request, as the name suggests, gets the data for us from a particular location using a specific URL. The boilerplate code to perform the GET request can be found below. For this example, we will be performing a GET request using the URL. You can call any API you like.
#include <ESP8266WiFi.h><br> } http.end(); //Close connection } // Delay delay(60000); }
The data that we are going to parse is contained in the payload variable. We don't actually need this variable when we are parsing our data later on.
Step 3: Using the ArduinoJson Assistant
The developers who developed the ArduinoJson library are so kind that they've even created an Assistant that writes the parser program for us using any JSON data as an input. To use the ArduinoJson assistant, you first need to know how your JSON is formatted and to do that, type in the URL that we used to perform the GET request earlier on into the browser of your choice and hit enter. Copy the JSON and head over to the ArduinoJson Assistant's web page and paste it into the text box below the label named "Input". Then scroll down to take a look at the parsing program generated by the Assistant. Copy the whole program or just a section of it.
Step 4: Completing the Code and the Result
Copying and pasting the parsing program generated by the Assistant into the boilerplate code that we used to perform a GET request earlier on would look like this:
) { // Parsing const size_t bufferSize = JSON_OBJECT_SIZE(2) + JSON_OBJECT_SIZE(3) + JSON_OBJECT_SIZE(5) + JSON_OBJECT_SIZE(8) + 370; DynamicJsonBuffer jsonBuffer(bufferSize); JsonObject& root = jsonBuffer.parseObject(http.getString()); // Parameters int id = root["id"]; // 1 const char* name = root["name"]; // "Leanne Graham" const char* username = root["username"]; // "Bret" const char* email = root["email"]; // "Sincere@april.biz" // Output to serial monitor Serial.print("Name:"); Serial.println(name); Serial.print("Username:"); Serial.println(username); Serial.print("Email:"); Serial.println(email); } http.end(); //Close connection } // Delay delay(60000); }
Since we are only interested in the name, email and username of the user, we just used a section of the parsing program generated by the assistant. You can use the serial monitor to view the output. If you don't see anything, press the reset button on your ESP8266 and *boom* you should see the output there. Note: The last line of code in the code above introduces a delay of 1 minute or 60,000 ms into the loop. This means that the API is only called once every minute. The number of times an API can be called within a specified timeframe varies and you are strongly encouraged to follow the guidelines specified by your API provider.
7 Discussions
Hi!
Nice work on this tutorial!
Remark that you can avoid the "payload" string by passing "http.getStream()" to "parseObject()".
[delete]
Would be possible adapt the code to use it with https?
Thanks for the article!
Could you please provide us the wiring diagram to connect esp8266 with arduino??
Hey, thanks a lot for the article. I'm trying to apply this to (I'm trying to get the titles of posts) and for some reason nothing returns (even though httpCode returns as 301). I pasted my code here: I used ArduinoJson assistant and changed the relevant parts in your code. I tried the suggestions in the "Why Parsing Fails" page of arduinojson.org to no avail. Can it be a memory problem, given that Reddit's JSON is considerably larger than the one in your example? I'm a beginner and still trying to wrap my mind around all of this. Thanks in advance.
Im trying to use this api:
and I always get httpCode value = -1
I tried to use https, same result..
Could you help me? Thank you
Hi, there seems to be a problem with the server's SSL certificate, it's either too large or they could have blocked off non-browser agents from accessing the API. I tried connecting to the service using HTTPS and the SHA1 fingerprints don't match every time I run it. The certificate size could be to blame. I will try again and will let you know if it works.
Any more info on this issue? I also get httpCode = -1 when trying to get data from or | https://www.instructables.com/id/ESP8266-Parsing-JSON/ | CC-MAIN-2018-34 | en | refinedweb |
Im lost what did i do wrong
Make sure that the_flying_circus() returns True
def the_flying_circus(True):
if True > False and not True < False: # Start coding here!
# Don't forget to indent
# the code inside this block!
elif True < False or True > False:
# Keep going here.
# You'll want to add the else statement, too! | https://discuss.codecademy.com/t/the-big-if-im-lost/29627 | CC-MAIN-2018-34 | en | refinedweb |
tag:blogger.com,1999:blog-65001863436458291142018-07-17T06:12:25.860+01:00Saved before I forgetThoughts on anything, though likely related to software developmentNick Gommansnoreply@blogger.comBlogger10125tag:blogger.com,1999:blog-6500186343645829114.post-78530830704808359322014-04-20T23:56:00.000+01:002014-04-20T23:59:13.513+01:00Gmail - Purging promotional emailsToday <a href=""></a> which gave me what I was aiming to do using Google scripts.<br /><br /><a name='more'></a><br /><br />What.<br /><br />I created my labels and assigned a few dozen emails to the various buckets (for example: things like Google calendar reminders, Amazon Local and Living Social for example are in my daily purge and some of my monthly newsletters are in the 30 day purge rule).<br /><br />Then head to <a href=""></a> and setup a new "Gmail" project which gives you a bunch of template functions. I then added my own functions which I've copied in here for convenience:<br /><br /><pre class="brush:javascript">/**<br /> * Trash all emails with "Delete/1 Day" after 1 day<br /> */<br />function deleteByLabelDelete1Day() { <br /> deleteByLabel('delete/1 day', 1);<br />}<br /><br />/**<br /> * Trash all emails with "Delete/7 Days" after 7 days<br /> */<br />function deleteByLabelDelete7Days() { <br /> deleteByLabel('delete/7 days', 7);<br />}<br /><br />/**<br /> * Trash all emails with "Delete/14 Days" after 14 days<br /> */<br />function deleteByLabelDelete14Days() { <br /> deleteByLabel('delete/14 days', 14);<br />}<br /><br />/**<br /> * Trash all emails with "Delete/30 Days" after 30 days<br /> */<br />function deleteByLabelDelete30Days() { <br /> deleteByLabel('delete/30 days', 30);<br />}<br /><br />/**<br /> * Trash all emails matching specified "label" older than "days"<br /> */<br />function deleteByLabel(label, days) { <br /> var lbl = GmailApp.getUserLabelByName(label);<br /> var</a><br />The.<br /><ul><li>The <strong>Source control</strong> solution I'm currently reviewing <a href="">Bitbucket</a> from Atlassian which seems to meet my needs on the freebie version at the moment.</li><li>For the <strong>image gallery</strong> I'm currently reviewing <a href="">Smugmug</a> which has a fair amount of customization capabilities and supports reading caption and geo-tagged data from my uploaded pictures. It also supports custom DNS on the non professional contracts.</li><li>For <strong>blogging</strong> I have decided to give <a href="">Blogger</a> a go considering it allows custom DNS for free and has been quick and painless to setup. As a plus the Android app seems good enough to use as an editor to crank out some basic content and reclaim some lost time commuting to work. </li><li>For <b>online backup and storage</b> I've been reviewing <a href="">IDrive</a>.</li></ul>Migrating content has actually been rather fun reminding me of some of my previous trips and some of the challenges I have posted solutions to in the past. The migration will be complete on or before June 10th as that is when my hosting contract is up. I plan to have things up to full speed by the middle of the summer with new content and a more regular posting schedule.<br /><br />NickNick Gommans 0.248894000000063951.476941999999987 0.0875325000000639 51.634885999999987 0.41025550000006389tag:blogger.com,1999:blog-6500186343645829114.post-33951686755385028472010-08-04T16:04:00.000+01:002014-04-20T23:55:20.753+01:00Handling custom errors in WCF[<a href="" target="_blank" title="CustomExceptions.zip">Download Full Solution</a>]<br /><br />In WCF the preferred way to handle custom error/exception states is through creating an object that can be serialized containing the fault information, and assigning it against an <a href="" title="OperationContractAttribute">OperationContractAttribute</a> decorated method using the <a href="" title="FaultContractAttribute">FaultContractAttribute</a>. Assigning a FaultContractAttribute provides an alternate object type to pass back to the client. The default action on the client is to raise a <a href="" title="FaultException<TDetail>">FaultException<MyCustomFault></a> which allows for the returned fault to be returned in the generic type.<br /><br /><a name='more'></a><br /><br / <a href="" title="IErrorHandler interface">IErrorHandler</a> implementation which greatly simplifies error management by centralizing it as part of the WCF configuration rather than the service configuration.<br /><br /.<br /><br /.<br /><br />Step one is to setup a basic service and client. Here's the extra three methods added to IService1:<br /><br /><pre class="brush:c#">using System.Runtime.Serialization;<br />using System.ServiceModel;<br />using CA.NGommans.Wcf.CustomExceptions.Common;<br /><br />namespace SampleService<br />{<br /> [ServiceContract]<br /> public interface IService1<br /> {<br /> #region CustomException demo<br /><br /> [OperationContract]<br /> [FaultContract(typeof(CustomFault))]<br /> string WillThrowArgumentException();<br /> <br /> [OperationContract]<br /> [FaultContract(typeof(CustomFault))]<br /> string WillThrowSharedException();<br /><br /> [OperationContract]<br /> [FaultContract(typeof(CustomFault))]<br /> string WillThrowSomeException();<br /><br /> #endregion<br /> }<br />}<br /></pre>As you can see from the above we have our FaultContract specified as a CustomFault type. We will explain the CustomFault object a little further on. Below is the implementation for each of the above which will throw our exceptions.<br /><br /><pre class="brush:c#">using System;<br />using CA.NGommans.Wcf.CustomExceptions.Common;<br />using CA.NGommans.Wcf.CustomExceptions.Service;<br /><br />namespace SampleService<br />{<br /> public class Service1 : IService1<br /> {<br /> #region CustomException demo<br /><br /> public string WillThrowArgumentException()<br /> {<br /> throw new ArgumentException("Something bad happened.");<br /> }<br /><br /> public string WillThrowSharedException()<br /> {<br /> throw new SharedException("A shared exception - something bad happened.");<br /> }<br /><br /> public string WillThrowSomeException()<br /> {<br /> throw new SomeException("Some exception - something bad happened.");<br /> }<br /><br /> #endregion<br /> }<br />}<br /></pre>We define "SomeException" in our Service project which is just an exception with the no parameter and string message parameter constructors directing to the base implementation provided by Exception.<br /><br />In our shared library between the service and client we need to include our "Shared" exception, and of course our fault contract.<br /><br /><pre class="brush:c#">using System;<br />using System.Runtime.Serialization;<br /><br />namespace CA.NGommans.Wcf.CustomExceptions.Common<br />{<br /> /// <summary><br /> /// Custom fault class for transporting exception data over the wire<br /> /// </summary><br /> [DataContract(Namespace = CustomFaultNamespace)]<br /> [Serializable]<br /> public class CustomFault<br /> {<br /> /// <summary><br /> /// Custom fault namespace<br /> /// </summary><br /> public const stringException to use public CustomFault(Exception exception) { if (exception != null) { ExceptionType = exception.GetType().AssemblyQualifiedName; Message = exception.Message; } } } /// <summary> /// Our special SharedException type which can be accessed from both /// server and client for the purposes of this demonstration /// </summary> public class SharedException : Exception { public SharedException() : base() { } public SharedException(string message) : base(message) { } } } </pre: <pre class="brush:c#">using System;<br />using System.ServiceModel.Configuration;<br /><br />namespace CA.NGommans.Wcf.CustomExceptions.Common.Dispatch<br />{<br /> /// <summary><br /> /// Exception behavior element for attaching exception behavior to WCF services and clients<br /> /// </summary><br /> public class ExceptionBehaviorElement : BehaviorExtensionElement<br /> {<br /> /// <summary><br /> /// Type of behavior to expose (<see cref="ExceptionBehavior"/>)<br /> /// </summary><br /> public override Type BehaviorType<br /> {<br /> get { return typeof(ExceptionBehavior); }<br /> }<br /><br /> /// <summary><br /> /// Create an instance of our behavior<br /> /// </summary><br /> /// <returns>Returns a new <see cref="ExceptionBehavior"/></returns><br /> protected override object CreateBehavior()<br /> {<br /> return new ExceptionBehavior();<br /> }<br /> }<br />}<br /></pre>With the above we can now create our "ExceptionBehavior" as per below. Note the code has been compressed to include only altered contracts. All other required interface methods are completely blank. <pre class="brush:c#">using System.Collections.ObjectModel;<br />using System.ServiceModel;<br />using System.ServiceModel.Channels;<br />using System.ServiceModel.Description;<br />using System.ServiceModel.Dispatcher;<br /><br />namespace CA.NGommans.Wcf.CustomExceptions.Common.Dispatch<br />{<br /> /// <summary><br /> /// Server/Client behavior which will catch/encode exceptions, and decode them (less the stack) on the client side.<br /> /// </summary><br /> internal class ExceptionBehavior : IEndpointBehavior, IContractBehavior, IServiceBehavior<br /> {<br /> void IEndpointBehavior.ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime runtime)<br /> {<br /> // Target for endpoint behaviors is WCF clients only<br /> this.ApplyClientBehavior(runtime);<br /> }<br /><br /> void IContractBehavior.ApplyClientBehavior(ContractDescription contract, ServiceEndpoint endpoint, ClientRuntime runtime)<br /> {<br /> // Target for contract behaviors is WCF clients only<br /> this.ApplyClientBehavior(runtime);<br /> }<br /><br /> void IServiceBehavior.ApplyDispatchBehavior(ServiceDescription description, ServiceHostBase host)<br /> {<br /> // Target for service behaviors is our server side error handler<br /> ApplyExceptionBehavior(host);<br /> }<br /><br /> /// <summary><br /> /// Client message inspector which will track an exception and throw it instead of the FaultException class.<br /> /// </summary><br /> /// <param name="runtime">WCF client runtime</param><br /> private void ApplyClientBehavior(ClientRuntime runtime)<br /> {<br /> // Don’t add a message inspector if it already exists<br /> foreach (IClientMessageInspector inspector in runtime.MessageInspectors)<br /> {<br /> if (inspector is ExceptionMessageInspector)<br /> {<br /> return;<br /> }<br /> }<br /> runtime.MessageInspectors.Add(new ExceptionMessageInspector());<br /> }<br /><br /> /// <summary><br /> /// Server exception behavior which will trap thrown exceptions and encode them into our <see cref="CustomFault"/><br /> /// </summary><br /> /// <param name="host">WCF service host</param><br /> private void ApplyExceptionBehavior(ServiceHostBase host)<br /> {<br /> // Ensure we only add this once per channel<br /> foreach (ChannelDispatcher dispatcher in host.ChannelDispatchers)<br /> {<br /> bool addErrorHandler = true;<br /> foreach (IErrorHandler handler in dispatcher.ErrorHandlers)<br /> {<br /> if (handler is ExceptionHandler)<br /> {<br /> addErrorHandler = false;<br /> break;<br /> }<br /> }<br /><br /> if (addErrorHandler)<br /> {<br /> dispatcher.ErrorHandlers.Add(new ExceptionHandler());<br /> }<br /> }<br /> }<br /> }<br />}<br /></pre>Now in order to satisfy the above we need the real bits of the puzzle now: our IErrorHandler, and our IClientMessageInspector: <pre class="brush:c#">using System;<br />using System.Reflection;<br />using System.ServiceModel;<br />using System.ServiceModel.Channels;<br />using System.ServiceModel.Dispatcher;<br />using System.Xml;<br /><br />namespace CA.NGommans.Wcf.CustomExceptions.Common.Dispatch<br />{<br /> /// <summary><br /> /// Server side exception handler. This class will intercept exceptions and setup a message which<br /> /// encodes the<br /> /// </summary><br /> internal class ExceptionHandler : IErrorHandler<br /> {<br /> #region IErrorHandler Members<br /><br /> /// <summary><br /> /// Since we are just serializing the type and the message we can support any type.<br /> /// </summary><br /> /// <param name="exception">Exception to encode</param><br /> /// <returns>Always returns true since we always encode the exception</returns><br /> public bool HandleError(Exception exception)<br /> {<br /> return true;<br /> }<br /><br /> /// <summary><br /> /// Generate a fault<br /> /// </summary><br /> /// <param name="exception">Exception to encode</param><br /> /// <param name="version">Message version</param><br /> /// <param name="message">Message reference (returned fault)</param><br /> public void ProvideFault(Exception exception, MessageVersion version, ref Message message)<br /> {<br /> // Wraps the exception type and message into our "CustomFault" before raising it to the client<br /> FaultException<CustomFault> faultexception = new FaultException<CustomFault>(new CustomFault(exception));<br /> MessageFault fault = faultexception.CreateMessageFault();<br /> message = Message.CreateMessage(version, fault, CustomFault.CustomFaultNamespace);<br /> }<br /><br /> #endregion<br /> }<br /><br /> /// <summary><br /> /// Attached to a client this message inspector will intercept fault exceptions and rethrow those of<br /> /// type <see cref="CustomFault"/> as their unwrapped exception (less the stack trace).<br /> /// </summary><br /> internal class ExceptionMessageInspector : IClientMessageInspector<br /> {<br /> #region IClientMessageInspector Members<br /><br /> /// <summary><br /> /// Intercept faulted replies and where they are of type "CustomFault" unwrap and raise<br /> /// the underlying exception (if known)<br /> /// </summary><br /> /// <param name="reply">Message from server</param><br /> /// <param name="correlationState">request/response state object</param><br /> public void AfterReceiveReply(ref Message reply, object correlationState)<br /> {<br /> if (reply.IsFault)<br /> {<br /> // Copy message just in case we don't want to throw an exception<br /> MessageBuffer buffer = reply.CreateBufferedCopy(Int32.MaxValue);<br /> Message copy = buffer.CreateMessage();<br /> reply = buffer.CreateMessage();<br /><br /> MessageFault fault = MessageFault.CreateFault(copy, int.MaxValue);<br /> if (fault.HasDetail)<br /> {<br /> XmlDictionaryReader reader = fault.GetReaderAtDetailContents();<br /> if (reader.Name == "CustomFault") // Although this works it's not ideal<br /> {<br /> CustomFault customFault = fault.GetDetail<CustomFault>();<br /> if( customFault != null)<br /> {<br /> Exception result = null;<br /><br /> // Exception type must be a valid type string...<br /> if( !string.IsNullOrEmpty(customFault.ExceptionType))<br /> {<br /> try<br /> {<br /> // Try to get a constructor we can use which needs to have one parameter for "message"<br /> Type exceptionType = Type.GetType(customFault.ExceptionType);<br /> if (exceptionType != null)<br /> {<br /> ConstructorInfo ci = exceptionType.GetConstructor(new Type[] { typeof(string) });<br /> result = ci.Invoke(new object[] { customFault.Message }) as Exception;<br /> }<br /> }<br /> catch(Exception)<br /> {<br /> // Do nothing<br /> }<br /> }<br /><br /> // Raise the exception if we could decode it<br /> // Instead, you could raise a general exception if the type could not be resolved<br /> if( result != null )<br /> {<br /> throw result;<br /> }<br /> }<br /> }<br /> }<br /> }<br /> }<br /><br /> public object BeforeSendRequest(ref Message request, IClientChannel channel)<br /> {<br /> return null;<br /> }<br /> #endregion<br /> }<br />}<br /></pre: <pre class="brush:xml"><?xml version="1.0"?> /> <behaviors><br /> <serviceBehaviors><br /> <behavior><br />...<br /> <!-- Exception encoding behavior --><br /> <encodeExceptions /><br /> </behavior><br /> </serviceBehaviors><br /> </behaviors><br />...<br /> </system.serviceModel> <br /></configuration><br /></pre>Client <pre class="brush:xml"><?xml version="1.0" encoding="utf-8" ?> />...<br /> <client><br /> <endpoint address="" binding="basicHttpBinding"<br /><br /> </client><br />...<br /> <behaviors><br /> <endpointBehaviors><br /> <behavior name="exceptions"><br /> <encodeExceptions /><br /> </behavior><br /> </endpointBehaviors><br /> </behaviors><br /> </system.serviceModel><br /></configuration><br /></pre).Nick Gommans Proxy Generation using "RealProxy"[<a href="" target="_blank" title="ClientCaching.zip">Download Full Solution</a>]<br /><br />I've been looking around the top side of the WCF client proxy for a nice place to hook in contract specific caching. I was hoping to find soemthing similar to the <a href="" title="IOperationInvoker">IOperationInvoker</a>...<br /><br /><a name='more'></a><br /.<br /><br />First order of business is to write an implementation of the <a href="" title="IChannelFactory<TChannel>">IChannelFactory<TChannel></a><br /><br /><pre class="brush:c#">using System;<br />using System.ServiceModel;<br />using System.ServiceModel.Channels;<br />using System.ServiceModel.Description;<br /><br />namespace CA.NGommans.Wcf.Caching<br />{<br /> /// <summary><br /> /// Caching channel factory<br /> /// </summary><br /> /// <typeparam name="T">Type to generate</typeparam><br /> public class CachingChannelFactory<T> : ChannelFactory<T><br /> {<br /> /// <summary><br /> /// Get/Set local caching implementation<br /> /// </summary><br /> public T LocalCacheImplementation { get; set; }<br /><br /> #region Constructors<br /><br /> public CachingChannelFactory()<br /> : base("*")<br /> { }<br /><br /> public CachingChannelFactory(T localImplementation)<br /> : base("*")<br /> {<br /> LocalCacheImplementation = localImplementation;<br /> }<br /><br /> // Suppressed the rest for simplicity<br /><br /> #endregion<br /><br /> /// <summary><br /> /// Bridge cache proxy onto service proxy<br /> /// </summary><br /> /// <param name="address">Underlying service proxy address</param><br /> /// <param name="via">Underlying serivce proxy via</param><br /> /// <returns>Caching proxy</returns><br /> public override T CreateChannel(EndpointAddress address, Uri via)<br /> {<br /> T underlier = base.CreateChannel(address, via);<br /> SwitchingProxy builder = new SwitchingProxy(typeof(T), underlier, LocalCacheImplementation);<br /> T proxy = (T)builder.GetTransparentProxy();<br /> return proxy;<br /> }<br /> }<br />}<br /></pre><br />The key above is the creation of the underlying/base proxy and assignment of it as one of the parameters against our "SwitchingProxy". This is the proxy which will flip between local and remote implementations.<br /><br />Below is the implementation of "SwitchingProxy" which does the heavy lifting for us and handles invocation of our local implementation and remote proxy.<br /><br /><pre class="brush:c#">using System;<br />using System.Reflection;<br />using System.Runtime.Remoting;<br />using System.Runtime.Remoting.Messaging;<br />using System.Runtime.Remoting.Proxies;<br />using System.Security;<br /><br />namespace CA.NGommans.Wcf.Caching<br />{<br /> /// <summary><br /> /// Switching proxy provides an intermediary<br /> /// </summary><br /> [SecurityCritical(SecurityCriticalScope.Everything)]<br /> internal sealed class SwitchingProxy : RealProxy, IRemotingTypeInfo<br /> {<br /> private Type proxiedType;<br /> private object remoteProxy = null;<br /> private object localProxy = null;<br /><br /> /// <summary><br /> /// A proxy that switches between a local and remote source<br /> /// </summary><br /> /// <param name="type">type to switch</param><br /> /// <param name="remote">remote proxy</param><br /> /// <param name="local">local implementation</param><br /> public SwitchingProxy(Type type, object remote, object local)<br /> : base(type)<br /> {<br /> this.proxiedType = type;<br /> remoteProxy = remote;<br /> localProxy = local;<br /> }<br /><br /> /// <summary><br /> /// Provides mapping capabilities<br /> /// </summary><br /> /// <param name="message">Message to process</param><br /> /// <returns>Result message</returns><br /> public override IMessage Invoke(IMessage message)<br /> {<br /> IMessage result = null;<br /><br /> IMethodCallMessage methodCall = message as IMethodCallMessage;<br /> MethodInfo method = methodCall.MethodBase as MethodInfo;<br /><br /> // Check local proxy - note we ignore methods with null return types<br /> if (localProxy != null && method.ReturnType != null && method.DeclaringType.IsAssignableFrom(localProxy.GetType()))<br /> {<br /> // Invoke service call<br /> object callResult = method.Invoke(localProxy, methodCall.InArgs);<br /> if (callResult != null)<br /> {<br /> LogicalCallContext context = methodCall.LogicalCallContext;<br /> result = new ReturnMessage(callResult, null, 0, context, message as IMethodCallMessage);<br /> }<br /> }<br /><br /> // Invoke remote proxy<br /> if (result == null)<br /> {<br /> if (remoteProxy != null)<br /> {<br /> object callResult = method.Invoke(remoteProxy, methodCall.InArgs);<br /> LogicalCallContext context = methodCall.LogicalCallContext;<br /> result = new ReturnMessage(callResult, null, 0, context, message as IMethodCallMessage);<br /><br /> // Optionally set to cache if we are comply with the IRemoteResponse interface<br /> if (result != null && localProxy != null && localProxy is IRemoteResponse)<br /> {<br /> ((IRemoteResponse)localProxy).InvocationResponse(method, methodCall.InArgs, callResult);<br /> }<br /> }<br /> else<br /> {<br /> NotSupportedException exception = new NotSupportedException("Remote proxy is not defined");<br /> result = new ReturnMessage(exception, message as IMethodCallMessage);<br /> }<br /> }<br /> return result;<br /> }<br /><br /> #region IRemotingTypeInfo Members<br /><br /> /// <summary><br /> /// Checks whether the proxy that represents the specified object type can be cast<br /> /// to the type represented by the defined proxy.<br /> /// </summary><br /> /// <param name="toType">The type to cast to.</param><br /> /// <param name="o">The object for which to check casting.</param><br /> /// <returns>true if cast will succeed; otherwise, false.</returns><br /> bool IRemotingTypeInfo.CanCastTo(Type toType, object o)<br /> {<br /> bool result = true;<br /> if (!toType.IsAssignableFrom(this.proxiedType))<br /> {<br /> RealProxy objRef = RemotingServices.GetRealProxy(remoteProxy);<br /> if (objRef is IRemotingTypeInfo)<br /> {<br /> result = ((IRemotingTypeInfo)objRef).CanCastTo(toType, o);<br /> }<br /> else<br /> {<br /> result = false;<br /> }<br /> }<br /> return result;<br /> }<br /><br /> /// <summary><br /> /// Gets the fully qualified type name of the server object in a System.Runtime.Remoting.ObjRef.<br /> /// </summary><br /> string IRemotingTypeInfo.TypeName<br /> {<br /> get { return this.proxiedType.FullName; }<br /> set { }<br /> }<br /><br /> #endregion<br /> }<br />}<br /></pre><br /.<br /><br /.<br /><br />The bottom line is that we can very easily setup a proxy by instantiating an instance of CachingChannelFactory and following the same procedure as with ChannelFactory itself.<br /><br /><pre class="brush:c#">using System;<br />using System.ServiceModel;<br />using CA.NGommans.Wcf.Caching;<br />using ConsoleApplication.ServiceReference1;<br /><br />namespace ConsoleApplication<br />{<br /> class Program<br /> {<br /> static void Main(string[] args)<br /> {<br /> CachingChannelFactory<IService1> ccf = new CachingChannelFactory<IService1>(new CachedService());<br /> IService1 service = ccf.CreateChannel();<br /><br /> // Note proxy still works with IClientChannel and therefore ICommunicationObject/IDisposable contracts<br /> using (service as IClientChannel)<br /> {<br /> Console.WriteLine("Service Proxy returned: {0}", service.GetData(1));<br /> Console.WriteLine("Service Proxy returned: {0}", service.GetData(1));<br /> Console.WriteLine("Service Proxy returned: {0}", service.GetData(1));<br /> Console.WriteLine("Service Proxy returned: {0}", service.GetData(2));<br /> Console.WriteLine("Service Proxy returned: {0}", service.GetData(1));<br /> Console.WriteLine("Service Proxy returned: {0}", service.GetData(2));<br /> Console.WriteLine("Service Proxy returned: {0}", service.GetData(2));<br /> }<br /> Console.Write("\n\nPress Enter to continue...");<br /> Console.Read();<br /> }<br /> }<br />}<br /></pre><br />As for results, here's the output outlining where the local implementation was touched:<br /><br /><strong>Output of ConsoleApplication.exe</strong><br /><div style="background-color: black; border-bottom: 1px solid black; border-top: 1px solid black; color: #bbbbbb; font-family: 'Courier New'; font-weight: bold;"><pre>==Cache lookup missed for key: 1<br /><<Saving server response to cache: You entered: 1<br />Service Proxy returned: You entered: 1<br />>>Serving data from cache for key: 1<br />Service Proxy returned: You entered: 1<br />>>Serving data from cache for key: 1<br />Service Proxy returned: You entered: 1<br />==Cache lookup missed for key: 2<br /><<Saving server response to cache: You entered: 2<br />Service Proxy returned: You entered: 2<br />>>Serving data from cache for key: 1<br />Service Proxy returned: You entered: 1<br />>>Serving data from cache for key: 2<br />Service Proxy returned: You entered: 2<br />>>Serving data from cache for key: 2<br />Service Proxy returned: You entered: 2<br /><br /><br />Press Enter to continue...<br /></pre></div><br />As you can see above, the service was only queried twice. The balance of the above queries hit our local implementation.<br /><br />The sample referenced at the top includes the code for "CachedService" which just implements the Service1 interface generated by default as part of the WcfService project template.Nick Gommans the type of a returned object in WCF[<a href="//googledrive.com/host/0B-ONgfpULonTbmx1Vlc1TE5KOEU/uploads/CodeSamples/SerializationSerrogate.zip" title="SerializationSerrogate.zip" target="_blank">Download Full Solution</a>]<br /><br />Every.<br /><br /.<br /><br /><pre class="brush:xml"><?xml version="1.0" encoding="utf-8" ?><br /><configuration><br /> <system.serviceModel><br /> <extensions><br /> <behaviorExtensions><br /> <add name="specializedType" type="CA.NGommans.Serialization.SpecializedTypeBehaviorExtensionElement, SerializationDemoClient, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/><br /> </behaviorExtensions><br /> </extensions><br />...<br /> <behaviors><br /> <endpointBehaviors><br /> <behavior name="specializedTypes"><br /> <specializedType><br /> <knownTypes><br /> <add name="BasicType" type="CA.NGommans.SerializationDemo.Client.SpecializedTypeManager, SerializationDemoClient, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/><br /> </knownTypes><br /> </specializedType><br /> </behavior><br /> </endpointBehaviors><br /> </behaviors><br /> </system.serviceModel><br /></configuration><br /></pre>The above first declares our new extension in the extensions block, then we define our new specializedType block under endpointBehaviors. This is then applied against the client endpoint.<br /><br />What this does is allow for an inherited type (helper or otherwise) to be substituted on the client side. We do this with a class such as the following:<br /><br /><pre class="brush:c#">using System.Runtime.Serialization;<br />using CA.NGommans.SerializationDemo.Client.BasicServiceReference;<br /><br />namespace CA.NGommans.SerializationDemo.Client<br />{<br /> /// <summary><br /> /// Extension type for <see cref="BasicType"/> which is our "Mine" type.<br /> /// </summary><br /> /// <remarks><br /> /// Impersonates <see cref="BasicType"/> for the purpose of serialization.<br /> /// </remarks><br /> [DataContract(<br /> /// </summary><br /> public const string/// <summary><br />/// Type manager for <see cref="BasicType"/><br />/// </summary><br />public class SpecializedTypeManager : IConvertType<br />{<br /> /// <summary><br /> /// Checks if this type is supported.<br /> /// </summary><br /> /// <param name="type">Type to anlayse.</param><br /> /// <returns>Boolean true if supported.</returns><br /> public bool CanConvertType(Type type)<br /> {<br /> bool result = false;<br /> if (type != null)<br /> {<br /> result = typeof(BasicType).IsAssignableFrom(type);<br /> }<br /> return result;<br /> }<br /><br /> /// <summary><br /> /// Attemps to convert a known type to another more specilized type<br /> /// </summary><br /> /// <param name="original">Original object to convert.</param><br /> /// <returns>Converted, more specilaized object.</returns><br /> public object ConvertType(object original)<br /> {<br /> object result = original;<br /><br /> BasicType orig = original as BasicType;<br /> if (orig != null)<br /> {<br /> if (string.Compare(orig.Type, MyType.TypeDescriptor, true) == 0)<br /> {<br /> MyType target = new MyType();<br /> target.StringValue = orig.StringValue;<br /><br /> result = target;<br /> }<br /> }<br /><br /> return result;<br /> }<br />}<br /></pre>The end result is that we can pass in/out objects that extend the BasicType class allowing for the client to have more specialized objects, regardless of if they are part of the service contract or not.<br /><br /><strong>Output of SerializationDemoClient.exe</strong><br /><div style="border-top: 1px solid black; border-bottom: 1px solid black; background-color: black; color: rgb(187, 187, 187); font-family: 'Courier New'; font-weight: bold;"><pre>Test application for IDataContractSerrogate implementation<br />----------------------------------------------------------<br />Testing Service - Round trip with BasicType:<br />----------------------------------------------------------<br />Results:<br />ObjectType:CA.NGommans.SerializationDemo.Client.BasicServiceReference.BasicType<br /> StringValue: SampleValue-Server<br /> Type: Basic<br />----------------------------------------------------------<br />Testing Service - Round trip with MyType:<br />----------------------------------------------------------<br />Results:<br />ObjectType:CA.NGommans.SerializationDemo.Client.MyType<br /> StringValue: MySpecialObjectType-Server<br /> Type: Mine<br />----------------------------------------------------------<br /></pre></div><br /.Nick Gommans | http://blog.ngommans.ca/feeds/posts/default | CC-MAIN-2018-34 | en | refinedweb |
Enabling Logging Programmatically
You can enable or disable logging programmatically by using either the Amazon S3 API or the AWS SDKs. To do so, you both enable logging on the bucket and grant the Log Delivery group permission to write logs to the target bucket.
Topics
Enabling Logging
To enable logging, you submit a PUT
Bucket logging request to add the logging configuration on the source bucket. The
request specifies the target bucket and, optionally, the prefix to be used with all
log
object keys. The following example identifies
logbucket as the target bucket
and
logs/ as the prefix.
<BucketLoggingStatus xmlns=""> <LoggingEnabled> <TargetBucket>logbucket</TargetBucket> <TargetPrefix>logs/</TargetPrefix> </LoggingEnabled> </BucketLoggingStatus>
The log objects are written and owned by the Log Delivery account, and the bucket owner is granted full permissions on the log objects. In addition, you can optionally grant permissions to other users so that they can access the logs. For more information, see PUT Bucket logging.
Amazon S3 also provides the GET Bucket
logging API to retrieve logging configuration on a bucket. To delete the logging
configuration, you send the PUT Bucket logging request with an empty
BucketLoggingStatus.
<BucketLoggingStatus xmlns=""> </BucketLoggingStatus>
You can use either the Amazon S3 API or the AWS SDK wrapper libraries to enable logging on a bucket.
Granting the Log Delivery Group WRITE and READ_ACP Permissions
Amazon S3 writes the log files to the target bucket as a member of the predefined
Amazon S3 group
Log Delivery. These writes are subject to the usual access control restrictions. You
must
grant
s3:GetObjectAcl and
s3:PutObject permissions to this group
by adding grants to the access control list (ACL) of the target bucket. The Log Delivery
group is represented by the following URL.
To grant
WRITE and
READ_ACP permissions, add the following
grants. For information about ACLs, see Managing Access with ACLs.
<Grant> <Grantee xmlns: <URI></URI> </Grantee> <Permission>WRITE</Permission> </Grant> <Grant> <Grantee xmlns: <URI></URI> </Grantee> <Permission>READ_ACP</Permission> </Grant>
For examples of adding ACL grants programmatically using the AWS SDKs, see Managing ACLs Using the AWS SDK for JavaConfiguring ACL Grants on an Existing Object and Managing ACLs Using the AWS SDK for .NET .
Example: AWS SDK for .NET
The following C# example enables logging on a bucket. You need to create two buckets, a source bucket and a target bucket. The example first grants the Log Delivery group the necessary permission to write logs to the target bucket and then enables logging on the source bucket. For more information, see Enabling Logging Programmatically. For instructions on how to create and test a working sample, see Running the Amazon S3 .NET Code Examples.
// Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: MIT-0 (For details, see.) using Amazon.S3; using Amazon.S3.Model; using System; using System.Threading.Tasks; namespace Amazon.DocSamples.S3 { class ServerAccesLoggingTest { private const string bucketName = "*** bucket name for which to enable logging ***"; private const string targetBucketName = "*** bucket name where you want access logs stored ***"; private const string logObjectKeyPrefix = "Logs"; // Specify your bucket region (an example region is shown). private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2; private static IAmazonS3 client; public static void Main() { client = new AmazonS3Client(bucketRegion); EnableLoggingAsync().Wait(); } private static async Task EnableLoggingAsync() { try { // Step 1 - Grant Log Delivery group permission to write log to the target bucket. await GrantPermissionsToWriteLogsAsync(); // Step 2 - Enable logging on the source bucket. await EnableDisableLoggingAsync(); } catch (AmazonS3Exception e) { Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message); } catch (Exception e) { Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message); } } private static async Task GrantPermissionsToWriteLogsAsync() { var bucketACL = new S3AccessControlList(); var aclResponse = client.GetACL(new GetACLRequest { BucketName = targetBucketName }); bucketACL = aclResponse.AccessControlList; bucketACL.AddGrant(new S3Grantee { URI = "" }, S3Permission.WRITE); bucketACL.AddGrant(new S3Grantee { URI = "" }, S3Permission.READ_ACP); var setACLRequest = new PutACLRequest { AccessControlList = bucketACL, BucketName = targetBucketName }; await client.PutACLAsync(setACLRequest); } private static async Task EnableDisableLoggingAsync() { var loggingConfig = new S3BucketLoggingConfig { TargetBucketName = targetBucketName, TargetPrefix = logObjectKeyPrefix }; // Send request. var putBucketLoggingRequest = new PutBucketLoggingRequest { BucketName = bucketName, LoggingConfig = loggingConfig }; await client.PutBucketLoggingAsync(putBucketLoggingRequest); } } }
More Info
Amazon S3 Server Access Logging
AWS::S3::Bucket in the AWS CloudFormation User Guide | https://docs.aws.amazon.com/AmazonS3/latest/dev/enable-logging-programming.html | CC-MAIN-2018-34 | en | refinedweb |
std::list
From cppreference.com
std::list is a container that supports constant time insertion and removal of elements from anywhere in the container. Fast random access is not supported. It is usually implemented as a doubly-linked list. Compared to std::forward_list this container provides bidirectional iteration capability while being less space efficient.
Adding, removing and moving the elements within the list or across several lists does not invalidate the iterators or references. An iterator is invalidated only when the corresponding element is deleted.
std::list meets the requirements of Container, AllocatorAwareContainer, SequenceContainer and ReversibleContainer.
[edit] Template parameters
[edit] Member types
[edit] Member functions
[edit] Non-member functions
[edit] Deduction guides(since C++17)
[edit] Example
Run this code
#include <algorithm> #include <iostream> #include <list> int main() { // Create a list containing integers std::list<int> l = { 7, 5, 16, 8 }; // Add an integer to the front of the list l.push_front(25); // Add an integer to the back of the list l.push_back(13); // Insert an integer before 16 by searching auto it = std::find(l.begin(), l.end(), 16); if (it != l.end()) { l.insert(it, 42); } // Iterate and print values of the list for (int n : l) { std::cout << n << '\n'; } }
Output:
25 7 5 42 16 8 13 | https://en.cppreference.com/w/cpp/container/list | CC-MAIN-2018-34 | en | refinedweb |
import "go.chromium.org/luci/client/downloader"
Package archiver implements the pipeline to efficiently archive file sets to an isolated server as fast as possible.
Downloader is a high level interface to an isolatedclient.Client.
Downloader provides functionality to download full isolated trees.
func New(ctx context.Context, c *isolatedclient.Client, maxConcurrentJobs int) *Downloader
New returns a Downloader instance.
ctx will be used for logging.
FetchIsolated downloads an entire isolated tree into a specified output directory.
Returns a list of paths relative to outputDir for all downloaded files.
Note that this method is not thread-safe and it does not flush the Downloader's directory cache.
Package downloader imports 11 packages (graph) and is imported by 4 packages. Updated 2018-08-14. Refresh now. Tools for package owners. | https://godoc.org/go.chromium.org/luci/client/downloader | CC-MAIN-2018-34 | en | refinedweb |
asp:review
Nevron .NET Vision
Charting & Diagramming for ASP.NET
By Steve C. Orr
A well-designed chart can crystallize mundane data into useful visualizations that lead a company to successes it otherwise may not have obtained. Likewise, a well-designed diagram can uniquely alert leaders to upcoming opportunities or foreboding threats. With Nevron s .NET Vision suite in their toolbox, developers can be empowered to dynamically create such charts and diagrams and potentially become heroes within an organization.
You might argue that it s not terribly difficult for an experienced .NET developer to use the System.Drawing namespace to create a basic bar chart. Indeed, such techniques have been covered in this very magazine. However, if you need easily implemented professional-looking charts and diagrams that go well beyond the basics, Nevron s suite might just be the perfect solution. This well-supported toolset provides maximum value with minimal effort your end users may think you worked weeks to create data visualizations that actually took only minutes to assemble.
Behind the Curtains
Underlying both the chart and the diagram controls is a common framework that handles serialization, formula processing, and other core functionality. This highly efficient document object model was thoughtfully designed in a consistent manner, so you need only to learn one system to be productive with both controls.
This core system is located in the Nevron.System.dll assembly. As with all their assemblies, a simple XCopy deployment is all that s necessary, with no registration or other potentially prohibitive server configurations necessary. If you do choose to register the components in the Global Assembly Cache, a handy MSI merge module is included for simple and convenient integration within your current deployment package(s).
The 160MB freely downloadable evaluation package has a slick installation that (with your permission) automatically adds its controls to your Visual Studio (2005 or 2008) toolbox. It also adds well-organized documentation links and sample projects to your Windows Start menu to help you get started quickly.
Nevron Chart for .NET
The chart control includes support for more than two dozen major chart types, each with multiple sub-chart options that add up to a seemingly infinite array of displayable charts and gauges. At design time, smart tags provide access to a variety of automatic configuration options, useful dialog boxes, and a built-in editor. Without the handy design-time editor shown in Figure 1, a new developer could easily become overwhelmed by all the available chart configuration options.
Figure 1: The design-time tools help developers easily configure the vast array of available charts, gauges, and diagrams.
At design time, charts can be copied to the clipboard, printed, or exported to a variety of file formats such as JPG, PNG, BMP, GIF, and TIFF. Once you have a chart configured perfectly, you can save its configuration to a file and load it again later to identically configure another chart.
The chart control supports both GDI+ and OpenGL rendering so you can get the best of both worlds. While static 2D and 3D renderings are certainly possible, more advanced users may be entranced by interactive features like dynamic AJAX data retrieval and the included data grid drilldowns, interactive charts and gauges, tooltips, and more.
Nevron Diagram for .NET
When it comes to charting controls, there are many companies competing for the attention of developers. However, Nevron s diagram control stands alone (see Figure 2). I ve never seen anything quite like it. I was amazed at the stunning variety of unique diagrams that are possible with this control. It may, in fact, be the best reason to buy Nevron .NET Vision. If you have the need to dynamically create diagrams, you might as well have a consistent and familiar charting control to use.
Figure 2: Nearly any kind of diagram imaginable can be dynamically created at run time with Nevron s diagram control.
Virtually any diagram you can imagine is possible: Illustrations, engineering drawings, housing, architecture, maps, city planning, flowcharts, programming models, org charts, supply chain diagrams, network maps, periodic tables, binary trees, triangular grids, family trees, and even origami instructions are just some of the many diagrams Nevron has demonstrated as possible on their Web site.
I strongly encourage you to peruse Nevron s diagram gallery to get your imaginative juices flowing about the kinds of diagrams you could create to help move your company forward.
Support System
In addition to their rich aforementioned galleries, Nevron also provides impressive live online demos, comprehensive feature lists, in-depth documentation, and detailed version histories. Nevron continually moves their feature sets forward with sturdy quarterly releases that provide ongoing value without breaking existing functionality or overwhelming developers with enormous new tasks.
Nevron stands behind their products, providing 30 days of free support before you even buy their product! This high-quality support is provided by people who actually helped develop the products, not just some hacks hired after the fact. After your purchase you ll receive an additional 60 days of free support. Furthermore, Nevron s Web site provides a comprehensive knowledge base, Frequently Asked Questions (FAQ) list, and online documentation.
Conclusion
Nevron .NET Vision is a powerful toolkit that empowers developers to create visually breathtaking interactive diagrams and charts dynamically at run time. The controls go together well thanks to the common framework underlying them both.
In addition to the ASP.NET support detailed in this review, Windows Forms charting and diagramming needs are also supplied so you can satisfy all sorts of end users. In fact, the full Nevron .NET Vision suite even includes the Nevron User Interface Suite. Because this sub-suite contains only Windows Forms controls, it was not evaluated for this review. However, this suite contains dozens of controls that may be useful to those of you who also do Windows Forms development, and it could be a valuable company asset.
I suggest you download your free evaluation today and try it out.
Steve C. Orr is an ASPInsider, MCSD, Certified ScrumMaster, Microsoft MVP in ASP.NET, and author of Beginning ASP.NET 2.0 AJAX by Wrox. He s been developing software solutions for leading companies in the Seattle area for more than a decade. When he s not busy designing software systems or writing about them, he often can be found loitering at local user groups and habitually lurking in the ASP.NET newsgroup. Find out more about him at or e-mail him at mailto:[email protected].
Rating:
Web Site:
Pricing: Nevron .NET Vision, starts at US$989; Chart for .NET, starts at US$299; Diagram for .NET, starts at US$589. Various discounts are available for multiple developer licenses, subscriptions, and packages that include extended support. | https://www.itprotoday.com/software-development/nevron-net-vision | CC-MAIN-2018-34 | en | refinedweb |
Miklos Szeredi <miklos@szeredi.hu> writes:> On Sat, Feb 15, 2014 at 01:37:26PM -0800, Eric W. Biederman wrote:>> >> v2: Always drop the lock when exiting early.>> v3: Make detach_mounts robust about freeing several>> mounts on the same mountpoint at one time, and remove>> the unneeded mnt_list list test.>> v4: Document the purpose of detach_mounts and why new_mountpoint is>> safe to call.>> >> Signed-off-by: Eric W. Biederman <ebiederman@twitter.com>>> --->> fs/mount.h | 2 ++>> fs/namespace.c | 39 +++++++++++++++++++++++++++++++++++++++>> 2 files changed, 41 insertions(+), 0 deletions(-)>> >> diff --git a/fs/mount.h b/fs/mount.h>> index 50a72d46e7a6..2b470f34e665 100644>> --- a/fs/mount.h>> +++ b/fs/mount.h>> @@ -84,6 +84,8 @@ extern struct mount *__lookup_mnt_last(struct vfsmount *, struct dentry *);>> >> extern bool legitimize_mnt(struct vfsmount *, unsigned);>> >> +extern void detach_mounts(struct dentry *dentry);>> +>> static inline void get_mnt_ns(struct mnt_namespace *ns)>> {>> atomic_inc(&ns->count);>> diff --git a/fs/namespace.c b/fs/namespace.c>> index 33db9e95bd5c..7abbf722ce18 100644>> --- a/fs/namespace.c>> +++ b/fs/namespace.c>> @@ -1359,6 +1359,45 @@ static int do_umount(struct mount *mnt, int flags)>> return retval;>> }>> >> +/*>> + * detach_mounts - lazily unmount all mounts on the specified dentry>> + *>> + * During unlink, rmdir, and d_drop it is possible to loose the path>> + * to an existing mountpoint, and wind up leaking the mount.>> + * detach_mounts allows lazily unmounting those mounts instead of>> + * leaking them.>> + * >> + * The caller may hold dentry->d_inode->i_mutex.>> + */>> +void detach_mounts(struct dentry *dentry)>> +{>> + struct mountpoint *mp;>> + struct mount *mnt;>> +>> + namespace_lock();>> + if (!d_mountpoint(dentry))>> + goto out_unlock;>> +>> + /* >> + * The namespace lock and d_mountpoint being set guarantees>> + * that new_mountpoint will just be a lookup of the existing>> + * mountpoint structure.>> + */>> + mp = new_mountpoint(dentry);>> Howabout a get_mountpoint(dentry) helper, that returns NULL if it turns out to> be not a mountpoint? And, as an added bonus, you can drop the comment above as> well.I hate to admit it but that is a nice change. Especially as it allowsremoving the d_mounpoint check inside of namespace_lock. I still need acheak d_mounpoint check outside of namespace lock but inside it can go.The first time I looked at doing that I missed something and the changelooked too awkward to be worth implementing :(Eric | https://lkml.org/lkml/2014/2/24/761 | CC-MAIN-2020-16 | en | refinedweb |
What is the proper indentation for Python multiline strings within a function?
def method(): string = """line one line two line three"""
or
def method(): string = """line one line two line three"""
or something else?
It looks kind of weird to have the string hanging outside the function in the first example.
You probably want to line up with the
"""
def foo(): string = """line one line two line three"""
Since the newlines and spaces are included in the string itself, you will have to postprocess it. If you don't want to do that and you have a whole lot of text, you might want to store it separately in a text file. If a text file does not work well for your application and you don't want to postprocess, I'd probably go with
def foo(): string = ("this is an " "implicitly joined " "string")
If you want to postprocess a multiline string to trim out the parts you don't need, you should consider the
textwrap module or the technique for postprocessing docstrings presented in PEP 257:
textwrap.dedent function allows one to start with correct indentation in the source, and then strip it from the text before use.
The trade-off, as noted by some others, is that this is an extra function call on the literal; take this into account when deciding where to place these literals in your code.
import textwrap def frobnicate(param): """ Frobnicate the scrognate param. The Weebly-Ruckford algorithm is employed to frobnicate the scrognate to within an inch of its life. """ prepare_the_comfy_chair(param) log_message = textwrap.dedent("""\ Prepare to frobnicate: Here it comes... Any moment now. And: Frobnicate!""") weebly(param, log_message) ruckford(param)
The trailing
\ in the log message literal is to ensure that line break isn't in the literal; that way, the literal doesn't start with a blank line, and instead starts with the next full line.
The return value from
textwrap.dedent is the input string with all common leading whitespace indentation removed on each line of the string. So the above
log_message value will be:
Prepare to frobnicate: Here it comes... Any moment now. And: Frobnicate! | https://pythonpedia.com/en/knowledge-base/2504411/proper-indentation-for-python-multiline-strings | CC-MAIN-2020-16 | en | refinedweb |
This Bugzilla instance is a read-only archive of historic NetBeans bug reports. To report a bug in NetBeans please follow the project's instructions for reporting issues.
Product Version = NetBeans IDE 8.1 (Build 201510222201)
Operating System = Windows 10 version 10.0 running on amd64
Java; VM; Vendor = 1.8.0_92
Runtime = Java HotSpot(TM) 64-Bit Server VM 25.92-b14
Reproducibility: null
STEPS:
* Create a new Maven project in NetBeans.
* Create the following class in the project:
public class FooBar {
public static final String FOO_M1_BAR = "foobar";
public static final String FOO_42 = "foo";
public static final String FOO_42_BAR = "bar";
}
ACTUAL:
NetBeans underlines name FOO_42_BAR and marks its line with a yellow triangle. When hovering over the triangle mark, the following hint is displayed: "Constant name does not follow naming conventions: FOO_42_BAR".
EXPECTED:
Nothing is marked as wrong | https://bz.apache.org/netbeans/show_bug.cgi?id=262614 | CC-MAIN-2020-16 | en | refinedweb |
Last Updated: Feb 13, 2020
see the beta release notes
highlight points
This beta release of the SDK builds upon the previous beta to fix a major problem with SWF files that include embedded resources, as well as to incorporate initial versions of some new features and updates.
The key changes are:
-
Removal of resource limitations for Stage3D i.e. amount of GPU memory used by texture, vertex buffers etc as describe in
-
Update geometry APIs to allow object pooling, so for example, rather than creating and returning new Point, Matrix, Vector3D etc objects, functions within the flash.geom.* classes will take an optional parameter that can be an object to reuse and return.
Some of the other areas that had been under investigation may now move into a later production release, and it is also likely that support for tvOS will be pushed out to a later release due to the large number of changes that are required in this in order to update to the 13.x SDKs.
and
3.1.3 AS3 APIs
The updated APIs for flash.geom.* shall be documented here; these can be examined by importing the airglobal.swc into an appropriate IDE or decompiler. The online documentation is currently hosted by Adobe but will be taken over by HARMAN before the end of this year.
and
3.1.4 Features
AIR-310: Remove Stage3D resource limits for apps using namespace 33.1
AIR-313: Object pooling for geometry APIs
and
3.1.5 Bug Fixes
Gamua-227: Crash in loading SWF with embedded resources
and
3.2 Known Problems
See | https://discuss.as3lang.org/t/air-sdk-beta-33-1-0-43-pre-release/2164 | CC-MAIN-2020-16 | en | refinedweb |
CIAO 4.2 Release Notes
Sherpa v2 Release
The Sherpa v2 patch to CIAO 4.2 was released on 19 July 2010.
This patch contains changes to Sherpa, CIAO's modeling and fitting package. Sherpa v2 has a number of enhancements, such as two new iterative fitting methods, refinements to parallelization, and many improvements to instrument and source models.
Support for the S-Lang scripting language has been removed from Sherpa in this release. S-Lang is still included for the CIAO tools and modules and for the ChIPS plotting application. The CXC is committed to helping existing S-Lang users transition to Python; contact Helpdesk if you need assistance.
CIAO 4.2 & CALDB Release Notes
Platform Support
Mac OS X 10.4 PowerPC
The Sherpa v2 patch is not available for the Mac OS X PPC 10.4 platform. PPC users who install CIAO 4.2 will get Sherpa v1.
Sherpa
Removal of S-Lang Support
Sherpa v2 no longer supports S-Lang. The S-Lang interface has been removed and there is no method for importing Sherpa as a S-Lang module, e.g. 'require("sherpa")', into slsh or any other S-Lang application.
As a consequence, the option to use slsh as the interpreter in the Sherpa application (the "-l" switch) has been removed.
Starting Sherpa
Sherpa uses the $HOME/.ipython-ciao/ directory to store configuration information. When Sherpa is started, you may see a message about updating this file:
ATTENTION: Out of date IPython profile for Sherpa found in: /home/username/.ipython-ciao Local version (40201) vs latest (40202). Update to latest? [Y/N] :
Unless you have changed your $HOME/.ipython-ciao/ipythonrc-sherpa file, answer "Y". The outdated file is renamed with a timestamp to preserve it.
The new profile is installed read-only. If the user wants to modify how Sherpa uses IPython functions, or call "execfile" to load Python scripts when Sherpa starts, the customization file $HOME/.ipython-ciao/ipythonrc-sherpa-user should be used.
Iterative Fitting
Two iterative fitting methods have been added to Sherpa: Primini's methods and sigma-rejection. Both were fitting methods in Sherpa 3.4, and their ports to Sherpa 4 have been completed.
The essence of an iterative fitting method is that the fit method can be called several times, until some criterion is met.
Primini's method is to re-calculate statistical errors, using the best-fit model parameters from the previous fit, until the fit can no longer be improved.
Sigma-rejection is based on the IRAF SFIT function. In successive fits, data points for which ((data - model) / error) exceeds some threshold are added to the filter, and automatically excluded from the next fit.
Primini's method and sigma-rejection can only be called when the statistic is a chi-squared statistic. They cannot be used with least-squares, Cash or C-statistic.
Several new UI functions have been added, to allow users to set the iterative fitting method, to find out what the current iterative fitting method is, and to get and set options for this method. These functions are:
- set_iter_method(<string>)
- get_iter_method_name()
- list_iter_methods()
- get_iter_method_opt(<string>)
- set_iter_method_opt(<string>, value)
If the iterative fitting method is "none" (the default value), then no iterative fitting is done - when "fit()" is called, the optimization method is called once, and Sherpa otherwise operates as expected.
The statistic and optimization methods are selected independently of the iterative fitting method - thus:
sherpa> set_stat("chi2datavar") sherpa> set_method("neldermead") sherpa> set_iter_method("primini") sherpa> fit() # Primini's method is called sherpa> set_iter_method("none") sherpa> fit() # Nelder-Mead is called once, as expected
Filtering and Showing Data
Updates to load_filter() include the ability to read FITS images that hold filter information. There is a new keyword argument 'ignore' that indicates whether the filter information should be used to notice or ignore data points. 'ignore' is False by default.
load_filter(id=1, filename, bkg_id=None, ignore=False, ncols=2) set_filter(id=1, val, bkg_id=None, ignore=False)
The file header information is no longer included in the output of show commands.
show_all() displays the correct response information for PHA data sets. Information on background datasets has also been restored in show_all().
show_bkg_model() now displays the background scale factor.
Instrument Responses
New function get_response() returns the associated PHA instrument (RMF + ARF) or any combination or iteration. This response object is callable for use in a model expression. Backgrounds are supported using the bkg_id argument. This is especially useful when dealing with multiple responses.
rsp = get_response() set_full_model(rsp(xsphabs.abs1*powlaw1d.p1))
High level functions get_arf() and get_rmf() return 'instrument' versions of the ARF or RMF dataset that include a callable functionality. This allows the user to define a response in a model expression using arf and rmf instances. The multiplication with the exposure time is implicit.
arf = get_arf() rmf = get_rmf() set_full_model(rmf(arf(xsphabs.abs1*powlaw1d.p1)))
Source and Background Models
Updates to plot_source() now support the 'factor' setting of set_analysis(). Calling plot_source() with a setting of factor=1 corresponds to the XSPEC plot eufspec, a setting of factor=2 represents eeufspec.
eufspec: E f(E) set_analysis("energy", factor=1) plot_source() eeufspec: E^2 f(E) set_analysis("energy", factor=2) plot_source() eufspec: \lambda f(\lambda) set_analysis("wave", factor=1) plot_source() eeufspec: \lambda^2 f(\lambda) set_analysis("wave", factor=2) plot_source()
Sherpa now allows the user to define model expressions that apply response matrices, or PSFs, to some models, while not applying the response or the PSF to the rest of the model expression. An example of this kind of model is an expression where a spectral model is defined in energy space, and folded through a response matrix; then, a background model defined in counts, which is not folded through the response, is added to the model expression.
The new functions, set_full_model() and set_bkg_full_model(), allow users to explicitly define instruments and convolutions that are applied to specified model components.
Legacy functionality is still supported with set_source() and set_model(); CIAO 4.2 scripts using these functions will continue to work in the current Sherpa.
Automatic Manual Definition ============= ================= set_source() set_full_model() set_model() set_bkg_source() set_bkg_full_model() set_bkg_model()
A new high level UI function add_model() that assigns a user-defined model class as a Sherpa model type. User-defined model classes that inherit from the Sherpa Arithmetic model class or other Sherpa models are accepted.
from sherpa.models import PowLaw1D class MyPowLaw(PowLaw1D): pass add_model(MyPowLaw) set_model(mypowlaw.p1)
New Sherpa models called scale1d and logparabola. The scale1d model is simply const1d with the integrate flag turned off by default. If a user sets scale1d as a model with a integrated data set, it will behave as a simple constant by default. A 2D version, scale2d, will come next week.
The logparabola model has the following form
/ x \ - p[1] - p[2] * log10(x/p[0]) f(x) = p[3] |----| \p[0]/
Bug fix: powlaw1d results corrected for gamma very close to 1.
The atten model ignores the integrate setting.
Bug fix: the refflag parameter in the XSpec model xsgrad should always be frozen.
Bug fix: show_bkg_source() was ignoring the bkg_id value.
Improved error messages during model evaluation when arrays are different sizes.
XSPEC model parameter bug fixes - flag parameters are now always frozen.
The Sherpa Model base class now includes startup() and teardown() methods for one-time initialization and cleanup. This is primary used for noticing instrument responses before and after model evaluation, but extends to all derived Model classes including UserModel and user extensions.
These methods are called once during fitting, confidence, confidence plotting. The methods are called every time with calc_stat().
PSF and Table Models
The Sherpa table model has been updated to support interpolation of data points on the data set grid from the grid supplied from file. The grids need not be of constant or comparable bin size. If the table model grid is not sorted, Sherpa will sort it in ascending order.
XSPEC-style table models are now supported using load_table_model(). Additive and multiplicative table models are supported (Atable and Mtable).
load_table_model("xstbl", "mymod.mod") set_model(xstbl)
show_psf() now hides the header information if the PSF is from file.
PSF and Table model instances now have consistent signatures to their fold() methods. fold() takes a single dataset instance.
Bug fix to correctly handle rectangular PSFs from file. Rectangular PSF images are also correctly displayed from file in DS9.
Bug fix: allow simultaneous fitting when a different PSF model is assigned to each data set in a fit.
Parallelization
Parallel evaluation of proj() and conf() is now automatically turned off on single core machines.
When Ctrl-C is issued while proj() or conf() are running, Sherpa will now correctly kill off slave processes from any parallel evaluation.
A new utility function, parallel_map(), is available in sherpa.utils. All usage of the function divide_run_parallel() have been replaced with parallel_map() (except within OptMethods). divide_run_parallel() is to be deprecated.
This function is a parallelized version of the native Python map which evaluates a function over a list of arguments.
from sherpa.utils import parallel_map parallel_map(lambda x: x*x, [1,2,3]) -> [1,4,9]
Additional Enhancements & Bug Fixes
There is a new vectorized function 'interpolate' found in the module sherpa.utils.
from sherpa.utils import interpolate yout = interpolate(xout, xin, yin, method='linear'|'nearest')
The energy and photon flux is calculated more efficiently for PHA data sets with multiple responses.
Error messages for get_arf(), get_rmf(), and get_bkg() have been improved.
Bug fix: display plot errorbars correctly when statistical error is calculated by the statistic and systematic error is set by the user.
sample_energy_flux() and the associated functions use covar() instead of proj().
set_analysis("wave") sets the background_up and background_down units to wavelength for grating PHA data.
Sherpa now uses a private DS9 window for communication, with id 'sherpa' for session imaging.
User creation of WCS objects does not require keywords to be of type numpy ndarray.
Analysis Scripts
fit_primini
The Primini iterative fitting method is included in the CIAO 4.2 Sherpa v2 release. It is no longer necessary to load the sherpa_contrib.primini module.
Documentation
Removal of S-Lang Support in Sherpa
The Sherpa documentation - including ahelp files and analysis threads - only contain Python-language syntax for the functions.
The CIAO 4.2 Sherpa v1 documentation is still available online for users who have not yet updated to Sherpa v2. | https://cxc.cfa.harvard.edu/ciao/releasenotes/ciao_4.2sh2_release.html | CC-MAIN-2020-16 | en | refinedweb |
Angular is a popular framework for creating front ends for web and mobile applications. It all started with AngularJS 1.x and then AngularJS 2, and now it's finally Angular, with the latest updates and bug fixes being worked on by the Angular team.
Components are an important part of an Angular web application. In this tutorial, you'll see how to get started with creating a web application using Angular, and you'll also get to know components.
Get started by installing Angular CLI using the node package manager (npm).
npm install -g @angular/cli
Once you have the Angular CLI installed, create a new Angular app using the CLI.
ng new angular-app
Navigate to the application folder and start the server.
cd angular-app ng serve
Point your browser to and you should have the default app running.
Navigate to the
angular-app project folder and have a look at the project structure. Here is how it looks:
Every Angular app has a root module where you define the main component to load. In the default Angular app, the root module is defined inside the
app.module.ts. When the
AppModule loads, it checks which component is bootstrapped and loads that module. As seen in the
app.module.ts, the module which is bootstrapped is
AppComponent. The
AppComponent component is defined in the file
app.component.ts.
A component is defined using the
@Component decorator. Inside the
@Component decorator, you can define the component
selector, the component
template, and the related
style.
Components are like the basic building block in an Angular application. Components are defined using the
@component decorator. A component has a
selector,
template,
style and other properties, using which it specifies the metadata required to process the component..
The best way to understand something related to programming is by actually doing. So let's start by creating an Angular component for adding two numbers. Let's call it
CalculatorComponent.
Let's start by creating a component for our calculator. Inside the
src/app folder, create a folder called
calc. This is where our
calc component will reside. Inside the
calc folder, create a file called
calc.component.html. This will be the template for our calculator component. Here is how it looks:
<h1> Calculator component </h1>
Create a file called
calc.component.ts. This is where you'll define the
calc component and specify the related metadata. You'll be defining the component using the
@component decorator. To define the component, you need to import the
component module from angular core.
import { Component } from '@angular/core';
Define the component by specifying the
template,
style, and
selector. You'll also define a
class to manage the template specified by the
@component decorator. Here is how it looks:
import { Component } from '@angular/core'; @Component({ selector: 'calc', templateUrl: 'calc.component.html', styleUrls: ['calc.component.css'] }) export class CalcComponent { }
All styles related to the component template should be defined inside the file specified in the component decorator. So create a file called
calc.component.css inside the
calc folder. You'll put the style for the calculator component inside this file.
Now that you have your component ready, let's define the component inside the root module
app.module.ts.
First import the component inside the
app.module.ts file and then include it in the
declarations section. Here is how it looks after adding the
CalcComponent:
import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; import { CalcComponent } from './calc/calc.component' @NgModule({ declarations: [ AppComponent, CalcComponent ], imports: [ BrowserModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { }
As you can see in the root module
app.module.ts, the
AppComponent is the bootstrapped module and hence it will render by default. So, to view our calculator component, define it inside the
app.component.html. Here is how the
app.component.html file looks:
<div style="text-align:center"> <calc></calc> </div>
Save the above changes and start the server. You should be able to see the HTML content of the calculator component displayed.
Let's start by adding a template for our Angular calculator. Add the following code to the
calc.component.html file:
<div class="container"> <div class="header"> <h2> Calculator component </h2> </div> <div class="grid"> <div class="row"> <div class="col-6"> <div class="operation"> <div class="row"> <div class="col-12"> <input type="number" name="" placeholder="number"> </div> </div> <div class="row"> <div class="col-12"> <input type="number" name="" placeholder="number"> </div> </div> <div> <div class="col-12"> <button class="button"> Add </button> </div> </div> </div> </div> <div class="col-6"> <div class="result"> <span> Result </span> </div> </div> </div> </div> </div>
Add the following style to the
calc.component.css file.
.grid{ width: 100% } .row{ width: 100%; display: flex; } %; } .header{ width: 100%; background-color: #003A60; height: 100px; } .header h2{ line-height: 100px; color: #fff; } .button { background-color: #4CAF50; /* Green */ border: none; color: white; padding: 15px 32px; text-align: center; text-decoration: none; display: inline-block; font-size: 16px; margin: 4px 2px; cursor: pointer; } input{ border: none; border-bottom: 1px solid grey; width: 80%; margin: 0% 10%; padding: 5%; } .result{ background-color: #ddffff; width: 80%; margin: 20px 10px 10px 10px; height: 100px; border-left: 3px solid #2196F3; } .result span{ line-height: 100px; }
Save the above changes and you should be able to view the following user interface.
Let's add the
ngModel directive to the above displayed input text boxes. Modify the
calc.component.html code as shown below:
<div class="row"> <div class="col-12"> <input [(ngModel)]="number1" type="number" name="" placeholder="number"> </div> </div> <div class="row"> <div class="col-12"> <input [(ngModel)]="number2" type="number" name="" placeholder="number"> </div> </div>
As seen above, you have set the
ngModel for the input text boxes to the variables
number1 and
number2.
Let's define the variables inside the
CalcComponent in the
calc.component.ts file.
export class CalcComponent { public number1 : number; public number2 : number; }
Now, when the user types into the text boxes, the corresponding
ngModel variable gets updated. You can check by displaying the variable in the component's template file.
<div class="result"> <span> Number 1 : {{number1}} Number 2 : {{number2}} </span> </div>
Save the changes and enter values inside the input boxes, and you should have the data updated inside the span.
Let's add a button click to the
Add button which will calculate the sum of the
number1 and
number2 when clicked on the button.
Modify the HTML code as shown to include the click directive.
<button (click)="add()" class="button"> Add </button>
Define the
add function inside the
CalcComponent as shown:
import { Component } from '@angular/core'; @Component({ selector: 'calc', templateUrl: 'calc.component.html', styleUrls: ['calc.component.css'] }) export class CalcComponent { public number1 : number; public number2 : number; public result : number; public add(){ this.result = this.number1 + this.number2 } }
As seen in the above code, the result of the addition is being placed in a variable called
result.
Let's modify the HTML template to display the result once the variable is set.
<div class="result"> <span> Result : {{result}} </span> </div>
Save the above changes and try to add two numbers by clicking on the Add button. You will have the result displayed in the user interface.
In this tutorial, you saw how to get started with creating a web app using Angular 4. You learnt about Angular components and how to create one. You created a simple Angular component to add two numbers.
Source code from this tutorial is available on GitHub. If you’re looking for additional resources to study or to use in your work, check out what we have available on Envato Market.
Do let us know your thoughts, suggestions or any corrections… | https://www.4elements.com/blog/read/beginners_guide_to_angular_4_components | CC-MAIN-2020-16 | en | refinedweb |
1515/performing-iteration-over-each-entry-in-a-mapreturns entries in natural order of keys.
In Java 8 you can do it clean and fast using the new lambdas features:
Map<String,String> map = new HashMap<>();
map.put("SomeKey", "SomeValue");
map.forEach( (k,v) -> [do something with key and value] );
// such as
map.forEach( (k,v) -> System.out.println("Key: " + k + ": Value: " + v));
The type of k and v will be inferred by the compiler and there is no need to use Map.Entryanymore.
You can use Java Runtime.exec() to run python script, ...READ MORE
Here are two ways illustrating this:
Integer x ...READ MORE
String s="yourstring";
boolean flag = true;
for(int i=0;i<s.length();i++)
{
...READ MORE
You can use readAllLines and the join method to ...READ MORE
Assuming TreeMap is not good for you ...READ MORE
public class NewClass1 {
...READ MORE
Convert hashmap to an ArrayList with a ...READ MORE
for (Map.Entry<String, String> item : params.entrySet()) {
...READ MORE
int[][] multi = new int[5][];
multi[0] = new ...READ MORE
Java 8 Lambda Expression is used:
String someString ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/1515/performing-iteration-over-each-entry-in-a-map | CC-MAIN-2020-16 | en | refinedweb |
On Tue, 2014-06-03 at 10:54 -0700,.Any implementation which doesn't support XFS is unviable from a distropoint of view. The whole reason we're fighting to get USER_NS enabledin distros goes back to lack of XFS support (they basically refused toturn it on until it wasn't a choice between XFS and USER_NS). If we putthem in a position where they choose a namespace feature or XFS, they'llchoose XFS.XFS developers aren't unreasonable ... they'll help if we ask. I meanit was them who eventually helped us get USER_NS turned on in the firstplace.James | https://lkml.org/lkml/2014/6/7/118 | CC-MAIN-2020-16 | en | refinedweb |
A digital dashboard is a portal composed of Web components (called Web Parts) that can be combined and customized to meet the needs of individual users. Web Parts are reusable components that wrap Web-based content such as XML, HTML, and scripts with a standard property schema that controls how Web Parts are rendered in a digital dashboard.
This chapter explains how to build a digital dashboard that contains interactive Web Parts that respond to events generated by other parts in the same dashboard. (This chapter assumes you are familiar with Microsoft® SQL Server™ 2000, XML, scripting, and Web application development.) Dashboards support part integration through a set of services provided by the Digital Dashboard Services Component (DDSC). The DDSC includes Part Discovery, Part Notification, Session State Management, and Item Retrieval. There is an underlying object model that you can use to program the services into your code.
When building an integrated or interactive dashboard, Part Notification provides the most relevant service. Part Notification service refers to event notification and a corresponding response. Understanding how this service works is key to building interactive Web Parts. This chapter describes how to deploy this service in the context of building a simple dashboard.
A dashboard can be an arbitrary container for unrelated parts (for example, a collection of your favorite Web sites or applications arranged into a personal dashboard for easy access), or it can be a container of parts that work together by sharing, summarizing, or filtering the same data set. In the later case, the dashboard operates more like an application, with features and functionality distributed across multiple parts. This chapter describes the basic techniques you need to build exactly this kind of dashboard.
The objective of this chapter is to show you the process of creating an interactive dashboard and how to retrieve sample data from the Northwind database using the XML features in SQL Server 2000. Specifically, this chapter teaches you how to:
Create parts that get and transform XML-based data from SQL Server.
Reference an HTC file that defines HTML behaviors in your dashboard.
Use the Digital Dashboard Service Component (DDSC) to raise and respond to events occurring at the part level.
Create isolated frames that enable DDSC events to occur on the client, eliminating round trips to the server and improving security.
To illustrate these points, a Customer Information dashboard that contains two parts is created. The first Web Part presents a list of customers retrieved from Northwind. The second Web Part is a bar chart that shows order volume by year for a specific customer that you select. When the user clicks a value in the customer list in the first part, the DDSC raises an event that causes the second part to get and display summarized order data about that customer.
The actual dashboard and Web Part definitions will be created by you. The code samples included with this chapter provide the Web-based content that you use to create the Web Parts. Code for this chapter is provided on the SQL Server 2000 Resource Kit CD-ROM. Each step of the process is explained, and the tools and software you need to perform each step are identified.
To follow the steps in this chapter, you must have SQL Server 2000 running on Microsoft Windows® 2000, and the Digital Dashboard Resource Kit (DDRK) 2.01. From the DDRK, you must also install the SQL Server sample Administration digital dashboard. The sample dashboard provides a way to create dashboards and parts.The sample Administration dashboard is used to define the dashboard and parts described in this chapter.
In the process of creating the dashboard, you will need to do the following:
Ensure that your SQL Server 2000 installation supports SQL Server authentication.
Install the DDRK 2.01.
Install the SQL Server sample Administration dashboard from the DDRK.
Create virtual and physical directories to store the code sample files.
Copy the files to the directories you created in the previous step.
Edit the files to correct server name and path information.
Define a dashboard using the sample Administration digital dashboard.
Define a Customer List Web Part.
Define a Customer Order Chart Web Part.
Code samples provide the content of the Web Parts you will create. Web Part content can be XML, HTML, or scripts that get and transform data or that define events and behaviors. You can put the content in separate files that you reference or you can type it directly into the Web Part definition. For this exercise, the content is provided in files. Note that a single Web Part can use multiple files to supply functionality.
Code samples provided with this chapter include the following:
Customerlist.htm (provides content for the Customer List Web Part).
Customerlist.xml (contains an XML-based SQL Server query. This query gets a list of company names from the Customers table in Northwind).
Customerlist.xsl (transforms the company names in the Customer List Web Part).
Customerlist.htc (defines mouseover, mouseout, and click events for the Customer List Web Part).
Orderchart.htm (provides content for the Order Chart Web Part).
Orderchart.xsl (transforms order data for a specific customer).
The code sample files are commented to help you interpret the purpose and intent of the code. Snippets from these files appear in this chapter to illustrate key points.
Note Code samples require editing before you can use them. Many of the files contain placeholder values for your Microsoft Internet Information Services (IIS) server and virtual directories. Where indicated in the instructions, you need to replace the placeholder values with values that are valid for your computer.
This chapter requires Microsoft SQL Server 2000, Microsoft Windows 2000, Internet Explorer 5.0 or later, and the Digital Dashboard Resource Kit (DDRK) 2.01.
SQL Server 2000 is required because it includes XML support for exposing relational data as XML. In the sample dashboard you create, you access Northwind as XML from your Web browser by way of a virtual directory. SQL Server 2000 provides a tool for configuring a virtual directory for the Northwind database. Instructions for configuring this directory are covered later in this chapter.
If you install the DDRK on the same computer as SQL Server, your SQL Server installation needs to support SQL Server authentication. Hosting a dashboard and a SQL Server on the same computer means that the Web server (IIS) and SQL Server need to talk to each other. Having both the Web Server and SQL Server use the same integrated authentication mode results in a security violation; the Web server will be prevented from issuing a query to a SQL query when both servers reside on the same computer. To be able to query Northwind from your development computer, you need to use SQL Server authentication. Note that if SQL Server authentication is not enabled, you may need to reinstall SQL Server, selecting SQL Server authentication during the install process.
If SQL Server and the DDRK are installed on different computers, you can use whatever authentication mode you like. For more information about supported platforms and installation, see SQL Server Books Online.
For this chapter, Windows 2000 and IIS 5.0 are required on the server hosting the digital dashboard. This means that the computer on which you install the DDRK must be running some edition of Windows 2000 server.
Clients do not require Windows 2000. Client platforms include any edition of Windows 2000, Windows NT®, and Windows 98.
Viewing the dashboard and processing the underlying XML requires Internet Explorer 5.0 or 5.5.
Dashboard development starts with the DDRK, which provides the design-time framework and run-time components you need to deploy dashboards and parts. The DDRK 2.01 provides information and development resources. To learn about dashboards, you can read white papers, reference material, and overviews. Development resources include sample Administration digital dashboards that you can analyze to further your understanding of dashboard functionality.
More important, the sample Administration digital dashboards offer real functional value¯installing a sample dashboard simultaneously installs digital dashboard components, such as the dashboard factory, the DDSC, and dashboard storage support. The sample Administration digital dashboards also provide a user interface for creating new and modifying existing dashboards and parts, as well as the ability to set properties that control access and presentation.
The DDRK contains several sample Administration digital dashboards. For this chapter, we assume you are using the SQL Server Sample Administration Digital Dashboard. You will use this dashboard to create your own dashboard as well as define the Customer List and Order Chart parts.
You can download and install the DDRK 2.01 from.
To install the SQL Server Sample Digital Dashboard, open the DDRK and go to Building Digital Dashboards. Choose Install the Microsoft SQL Server 7.0 Sample Digital Dashboard (note that this sample dashboard is fully compatible with SQL Server 2000).
During installation, you will be asked to create a new SQL Server database to store the dashboards and parts you create. When defining the login to this database, use sa for the user name and leave the password blank.
After installation completes, the Welcome page of the SQL Server Sample Administration Digital Dashboard appears (note the HTTP address for future reference). Click Administration to open the Administration page. This is the page you will use later to define a new dashboard and Web Parts.
This section explains how to get files into the right places and configure virtual directories.
The code samples for this chapter are available on the SQL Server 2000 Resource Kit CD-ROM in the folder, \ToolsAndSamples\DigitalDashboard. There are six files altogether.
In the next several steps, we will tell you where to place the files and which files need editing.
Use Windows Explorer to create a physical directory in your Default Web Site directory. By default, the path is C:\Inetpub\Wwwroot. To this path, you can add a subdirectory named Tutorial, resulting in this path: C:\Inetpub\Wwwroot\Tutorial.
Into this directory, copy the following code sample files:
Customerlist.htm
Customerlist.htc
Orderchart.htm.
Use Internet Services Manager to create a new virtual directory under Default Web Site for your HTM and HTC files. In Windows 2000, this tool is located in the Administrative Tools program group. To create a virtual directory, right-click Default Web Site, and then click New Virtual Directory. To match the path names used in the code samples, name your virtual directory Tutorial.
To issue an SQL query through HTTP, you need to configure Northwind as a virtual directory. To do this, you use the Configure SQL XML Support in IIS tool, located in the Microsoft SQL Server program group. Instructions that describe this process in detail are provided in the topic "Creating the nwind Virtual Directory" in SQL Server Books Online. You should follow the instructions exactly. When you are finished, you should have the following physical directories:
\Inetpub\Wwwroot\nwind
\Inetpub\Wwwroot\nwind\schema
\Inetpub\Wwwroot\nwind\template
For each physical directory, you should have a corresponding virtual directory of the same name.
Into the \Inetpub\Wwwroot\nwind\template directory, copy the following code sample files:
Customerlist.xml
Customerlist.xsl
Orderchart.xsl
Note The nwind virtual directory is accessed by SQL Server when it retrieves data. The application virtual directory that you use to store the HTM and HTC files is accessed by the dashboard. This is why you need separate directories for each group of files.
After you copy all the files, you can adjust the server name and paths in the code sample files. In all cases, replace <
your server name
> with the name of your IIS server, correcting the virtual path names if necessary. Use the proper name rather than localhost for the server name. Using localhost results in permission denied errors when you add Web Parts later in the tutorial.
Open Customerlist.htm from the Tutorial folder using an HTML or text editor.
Edit the path in the IFRAME element: <IFRAME ID="CustFrame" SRC="http://<
your server name
>/nwind/template/customerlist.xml".
Save and close the file.
Open Orderchart.htm from the Tutorial folder using an HTML or text editor.
Edit the path in the SRC property of the ChartFrame object: document.all.ChartFrame.src = "http://<
your server name
>/Nwind?xsl=…".
Open Customerlist.xsl from the Template folder using an HTML or text editor.
Edit the path in td style 1048528364element1048528364: td {behavior:url(http://<
your server name
>/tutorial/customerlist.htc)}.
This section tells you how to use the Administration sample dashboard to define a new dashboard and the parts that go in it.
A dashboard is a container for Web Parts. It is defined by a schema and supports properties that determine dashboard appearance and behavior. To create the Customer Information dashboard, you start by defining a new dashboard.
In your browser, open the Administration page of the SQL Server Sample Digital Dashboard. The default address is http://<your server name>/Dashboard/Dashboard.asp?DashboardID=http://<your server name>/Sqlwbcat/Welcome/Administration.
In the Dashboard View pane, select Sqlwbcat, and then click New to define a new dashboard. Sqlwbcat is the default name of both the SQL Server database and IIS extension that manages dashboard and part storage. The dashboard that you define will be stored and managed by Sqlwbcat.
In the Dashboard Properties pane, replace the default name NewDashboard1 with CustomerInfo, and then replace the default title New Dashboard with Customer Information Dashboard.
If you wish, choose a different predefined stylesheet.
Click Save. The CustomerInfo dashboard is added to the list of dashboards for Sqlwbcat.
To test your progress so far, open your browser and paste this Address: http://<your server name>/Dashboard/Dashboard.asp?DashboardID=http://<your server name>/Sqlwbcat/CustomerInfo. You should see an empty dashboard, correctly titled and styled, with the Content, Layout, and Settings items in the top right corner.
Save this URL in your Favorites list so that you can view the changes as you add each part.
The Customer List Web Part contains a list of customers, identified by Company Name. The content for this Web Part is an HTM file.
In your browser, open the Administration page of the SQL Server Sample Digital Dashboard. In the Dashboard View pane, select the CustomerInfo dashboard.
Scroll down to the Web Part List pane, and then click New to define a new part.
In the General tab of Web Part Properties, do the following four things:
Replace the default name NewPart1 with CustomerList.
Replace the default title NewPart1 with Customer List.
Select Left Column for the position on the page.
Set Fixed Size to a fixed height of 500 pixels. This shows more rows in the Customer List.
Click the Advanced tab.
Choose HTML for the Content Type.
In Content Link, type the following: http://<your server name>/tutorial/customerlist.htm
Click Save.
Note that if you subsequently change any properties (for example, to adjust the part position or change the title), the values you entered for fixed height will migrate to the fixed width fields. This bug will be fixed in a subsequent release. The workaround for now is to redo the fixed height, and then click no to disable the fixed width.
To test your progress so far, open or refresh the Customer Information dashboard in your browser. The Customer List Web Part should appear in the dashboard.
The Order Chart Web Part is an HTML file that contains summarized order data for the customer selected in the Customer List Web Part.
In your browser, open the Administration page of the SQL Server Sample Digital Dashboard, then select the CustomerInfo dashboard.
Replace the default name NewPart1 with OrderChart.
Replace the default title NewPart1 with Order Chart.
Select Right Column for the position on the page.
Set Fixed Size to a fixed height of 350 pixels to give the part more room.
In Content Link, type the following: http://<your server name>/tutorial/orderchart.htm
After you add the two parts, the dashboard is ready to use. Open the Customer Information dashboard in your browser. Click a Company Name in the Customer List Web Part. The Order Chart Web Part responds by querying Northwind for order information about the customer, and then aggregating that information into a set of values that can be represented by a bar chart. The name of the customer you select appears above the chart. The following sections detail the events and actions occurring behind the scenes that create the appearance and behavior you see in this dashboard.
This section highlights the more interesting aspects of the code samples. Each file is discussed separately. The following table describes the role of each file.
File
Description
Customerlist.htm
Creates a structure for the part.
Customerlist.xml
Gets customer data.
Customerlist.xsl
Transforms data by selecting it and applying HTML.
Customerlist.htc
Adds dynamic HTML behaviors, including definitions for the onclick event used to raise an event notification. This notification is received by the Order Chart Web Part.
Orderchart.htm
Creates a basic structure for the part, gets data by building a query that includes a Company Name passed through the onclick event defined in Customerlist.htc.
Orderchart.xsl
Transforms the data by selecting it and applying HTML. The bars in the bar chart are dynamically sized based on the amount of annual orders. Two functions different functions are used to calculate these values.
This HTML file provides the content for the Customer List Web Part. It contains a reference to the Customerlist.xml file, which in turn contains a reference to Customerlist.xsl, which references the Customerlist.htc file.
The Customerlist.htm file defines an isolated frame to contain Customer data from Northwind. Although you can isolate Web Parts in the Web Part definition, using this approach (that is, manually creating IFRAME elements) offers more security and allows you to invoke the DDSC at the part level.
Invoking the DDSC at the part level means that you can control other Web Parts (in this case, the Order Chart Web Part) from script inside an IFRAME. To do this, you create a variable named DDSC in the IFRAME content and then set its value equal to the DDSC that exists outside of the frame (that is, the DDSC instance for the dashboard). You can then use the DDSC variable to communicate with other parts.
In this example, a DDSC variable is declared in the source for the IFRAME (that is, in the Customerlist.xsl file, which in turn is referenced by the Customerlist.xml file, which provides the content to the IFRAME element).
This approach works because a parent can access an IFRAME (note that the reverse case of IFRAMEs accessing parents is not true). In this case, the DDSC instance at the dashboard level can access the IFRAME content you define and participate in the script that you associate with a given IFRAME element.
In the code snippet below, the IFRAME 1048528365ID 10485283661048528365attribute 1048528366is defined so that you can reference the frame in script.
Next, the IFRAME SRC attribute specifies the XML template file containing the Northwind query. This file is used to populate the frame with a scrollable list of Company Names. The names are retrieved from Northwind when the dashboard loads. Note that UTF-16 encoding is needed to accurately display foreign language characters in the data.
Finally, the IFRAME HEIGHT and WIDTH attributes expand the frame so that it occupies all of the available space of the Web Part.
<IFRAME ID="CustFrame"
SRC="http://<server>/nwind/template/customerlist.xml?contenttype=text/html&outputencodin
g=UTF-16" HEIGHT="100%" WIDTH="100%">
</IFRAME>
Further on in this file, you find a script block that instantiates a DDSC instance at the frame level, using the value of the IFRAME ID. The DDSC is one of the objects used to implement the Part Notification service. It exposes methods that both raise and respond to event notifications.
CustFrame.ddsc= DDSC;
This XML template file issues an SQL SELECT statement through IIS using the nwind virtual directory you configured earlier. Specifying the nwind virtual directory is equivalent to specifying the Northwind database (recall that this specification is part the value for the IFRAME SRC attribute in Customerlist.htm).
The root element defines a namespace and the XSL file used to transform the result set. The query statement is a child of the root element.
<root xmlns:
<sql:query>
SELECT CompanyName FROM Customers FOR XML AUTO
</sql:query>
</root>
This XSL file transforms the XML result set so that it appears in the page. It defines a template pattern that finds all Customer nodes and gets the value of the Company Name. The Company Name is inserted into a TD element in the order returned by the query.
In the code snippet below, the STYLE element defines CSS styles for TH and TD elements.
The STYLE TH element is styled with a gray background color.
The STYLE TD element calls an HTC file that combines style attributes with script to produce dynamic HTML for the content in each TD element.
<STYLE>
TH {background-color:#CCCCCC}
TD {behavior:url(http://<server>tutorial/customerlist.htc)}
</STYLE>
This file also declares a variable for DDSC. This variable is used in the Customerlist.htm file to invoke the DDSC object for an IFRAME element. Note that this declaration was discussed previously, in the Customerlist.htm section.
<script language="JScript">
var DDSC;
</script>
The Customer List Web Part is programmed for three events: onmouseover, onmouseout, onclick.
Onmouseover and onmouseout define rollover behavior.
Through the Click function, the onclick event instantiates the DDSC object at the part level. Clicking a company name raises an event (that is, broadcasts an event notification to other parts in the same dashboard). The RaiseEvent method is a method of the DDSC object.
function Click() {
ddsc.RaiseEvent("URN:Customer", "SelectCustomer", this.innerHTML);
}
The URN:Customer parameter is a user-defined namespace that you can create to provide a context for the event. For example, in any given application you may have multiple Click functions. Using a namespace provides a way to distinguish between click events that occur in an Employee form, a Customer list, or an Order bar chart.
The SelectCustomer parameter is an event name. This is a user-defined name that identifies the event to other Web Parts that respond to this event. Script attached to the responding Web Part (that is, the Order Chart) refers to the same event name when registering for the event.
The this.innerHTML parameter is an event object. This is the object upon which the function operates. In this case, it is a specific Company Name that the user clicks on. This value is passed as part of the event notification, making it available to other parts that want to use it.
This file provides the content for the Order Chart Web Part. The file contains an SQL SELECT statement issued through IIS using the nwind virtual directory you configured earlier. The query is multipart, using a combination of fixed strings and a Company Name value that is passed in as a parameter. The data that is returned is total order volume for a single customer, grouped by year. Clicking a different customer in the Customer List issues another query against the database, using new values that correspond to the selected customer. The return values are used to update the contents of the Order Chart.
The code that relates the Order Chart to the Customer List Web Part is the following:
DDSC.RegisterForEvent("URN:Customer", "SelectCustomer", this.innerHTML);
The SelectCustomer parameter is the event name, and this.innerHTML is the event object.
As with the Customer List, an isolated frame is used to contain the data. The IFRAME element is defined as follows:
<IFRAME ID="ChartFrame" WIDTH="100%" FRAMEBORDER="0" NORESIZE</IFRAME>
The onSelectCustomer function provides the code that creates the multipart query. (Note that the first several lines of this function are used to search and replace special characters like ampersands and apostrophes to XML or HTTP equivalents). The query is specified through the SRC parameter of the IFRAME element by way of the document object model.
document.all.ChartFrame.src = "http://<server>/nwind?xsl=template/orderchart.xsl&contenttype=text/html&outputencoding=
UTF-
16&sql=Select+datepart(year,%20Orders.OrderDate)+as+Year,Sum([order%20details].UnitPrice
*[order%20details].Quantity)+as+OrderTotal+from+[order
details]+inner+join+Orders+on+[order%20details].OrderID=Orders.OrderID+inner+join+Custom
ers+on+Orders.CustomerID=Customers.CustomerID+where+customers.companyname='"
+customerName +"'+group+by+datepart(year,%20Orders.OrderDate)+FOR+XML+RAW&root=root";
In this query, an XSL file and encoding attribute are specified before the SELECT statement.
The SELECT statement itself is articulated in HTTP syntax. Because the query contains a dynamic element (CustomerName, which is the value passed in as "this.innerHTML" and it varies each time the user clicks a Company Name), a static XML template file could not be used. Passing the SQL query as a string provides a way to combine static and dynamic elements together.
This file transforms the XML result set returned for the Order Chart, creating the bar chart and displaying customer information based on an SQL query. This file is referenced in the HTTP statement for the SRC parameter.
The bar chart is simple HTML (in this case, TD elements in a table) and it shows differences among annual order volumes for a specific customer. To get differences in bar color and size, different attributes on the TD element are set. These attributes are BACKGROUND-COLOR and WIDTH. WIDTH is an XSL attribute (name=style) that is attached to the TD element. The value of WIDTH is calculated through script.
Color coding is based on the year (year values are detected through XSL). Because there are only three years worth of data in the Northwind database, we get by with XSL test cases that detect 1996, 1997, and 1998.
<xsl:attributewidth:<xsl:eval>getOrderPercent(this)</xsl:eval>;
<xsl:choose>
<xsl:whenbackground-color:red</xsl:when>
<xsl:whenbackground-color:blue</xsl:when>
<xsl:otherwise>background-color:purple;</xsl:otherwise>
</xsl:choose>
Sizing is based on order volume. In Northwind data, order volumes vary from two-digit to five-digit values. The wide range makes it difficult to scale the bars using fixed values (a bar chart based on pixels would need to accommodate bars that are 42 pixels long and 64,234 pixels long). To work around this, we use percentages. Percentage values show relative rather than absolute differences in the order volumes. For a specific customer, each annual volume (for 1996, 1997, or 1998) is some percentage of the combined three-year volume. To get the three different WIDTH values needed for the three bars in the bar chart, we use two functions.
The getOrderPercent function calculates the value of the TD WIDTH attribute by dividing an Order Total by the sum of all Order Totals. This function is called from an xsl:eval element (as shown in the first line of the previous code snippet).
The getOrderTotal function sums the Order Totals into one lump sum. This sum becomes the denominator in the getOrderPercent function.
Both functions are reproduced here in their entirety:
var nTotal = 0;
function getOrderPercent(nNode) {
var nPercent;
if (nTotal == 0)
nTotal=getOrderTotal(nNode.ParentNode);
nPercent=Math.round((nNode.getAttribute("OrderTotal") / nTotal) * 100) + '%';
return nPercent;
}
function getOrderTotal(nNode) {
var sum=0;
var rows=nNode.selectNodes("row");
for (var i = rows.nextNode(); i; i = rows.nextNode())
sum += parseInt(i.getAttribute("OrderTotal"));
return sum;
} | http://technet.microsoft.com/en-us/library/cc917654.aspx | crawl-002 | en | refinedweb |
Wiki Science/Wikiresearch/Presentation
From Wikibooks, the open-content textbooks collection
[edit] Wikiresearch
The highly successfull Wikipedia is not the proper place for original research.
Hence the the idea of Wikiresearch, a project to do wikistyle scientific research: collaborative and under a free license.
- Existing free text projects
- Free information and free software
- Scientific institutes and the market
- Wikiresearch - practical
- Social aspects
[edit] Existing free text projects
[edit] Wikipedia
- Wikipedia is a free encyclopedia being written collaboratively by many voluntary contributors from all over the world. Since 2001 over 260.000 English language and over 90.000 German articles have been written.
- Wiki: anyone with an internet connection can edit any article except for a few protected pages.
- Wikipedia's parent organization is the Wikimedia Foundation, a non-profit corporation organized under the laws of Florida. Copyrights of edits are retained by contributors.
- Articles are licensed under the GFDL and the MediaWiki software that the project runs on is released under the GPL. This makes sure the encyclopedia remains free. Anyone could start a mirror or a fork.
[edit] MediaWiki
MediaWiki has quite some advantages over other wiki systems.
- No use of UpperCaseLinks or little icons to indicate existence of a link;
- Colors of links indicate whether it's an external link or an existing or still non-existing article;
- Edit history has an adequate diff function, which facilitates maintenance and makes eradicating wiki vandalism a snap;
- Modular WikiTeX system: possibility to incorporate rendered TeX objects;
- Section editing;
- Image rescaling;
- Message transclusion;
- Use of namespaces to separate articles from discussions (Talk page), user pages and messages;
- Easy to find information about the structure, such as relationships between pages, wanted pages, pages with many or no links.
[edit] Some more free collaborative projects
- Wikibooks, for textbooks
- Wikiversity, a brand new "project geared towards learning"
- Wiktionary
- Disinfopedia: information about "PR firms, think tanks, industry-funded organizations and industry-friendly experts influencing public opinion and public policy on behalf of corporations, governments and special interests."
-.
- CorpKnowPedia: Charting the corporate landscape.
- Wikitravel: travel guide under CC-BY-SA.
[edit] No original research
Wikipedia is not the place for original research such as "new" theories.
Not a primary source, but a
- secondary source: one that analyzes, assimilates, evaluates, interprets, and/or synthesizes primary sources; or a
- tertiary source: one that generalizes existing research or secondary sources of a specific subject under consideration.
A Wikipedia entry is a report not an essay.
[edit] Free information and free software
Threshold of joining free information projects is much lower.
Differences:
- No weird syntax;
- Typos and spelling errors won't crash a computer;
- And many people can correct typos;
- Not necessary to be an expert.
Similarities:
- Translations into many languages;
- Requires an internet connection;
Both free software and wiki projects evolve like stone soup. But much more people are capable of adding their flavour to wikis. Wikipedia: gigantic soup containing many ingredients, constantly boiling, and people constantly throwing in new ingredients.
[edit] Scientific institutes and the market
Neoliberal trend to private financing of science; according to neo-liberal theory this would lead to more useful knowledge and products.
However it leads to:
- Bias of resulting findings;
- Less cooperation, especially in commercially viable fields of science;
- Restrictions on results:
- non-free software;
- patents, possibly on software;
- file formats (such as MP3);
- knowledge about organisms (basmati rice), medecine.
[edit] Free software
In 1970s developers, who were.
At present scientific institutes make use of free software, however often also of proprietary software: e.g. MATLAB, on a GNU/Linux OS. No urge to move away from proprietary to collaborative model.
Scientists do create free software though, as paid work but more often in spare time.
[edit] Publications
- Many scientific articles are exploited by publishers, with restrictions on use.
- More and more articles can be accessed online.
- Often online access is paid.
- Redistribution is hardly ever possible.
- Modification or reuse beyond simple quoting is out of the question.
Often publications discuss results obtained with software that is not freely available, only vaguely sketched in the article. This is problematic for falsificating or repeating the results.
Even inside institutions there is sometimes limited cooperation.
[edit] Wikiresearch - practical
First experiment of Wikiresearch has already been started at Wikibooks: Wiki Science, a study of the way wikis grow, change and adapt.
- Putting articles in Wikiresearch is to be encouraged;
- published as well as unfinished articles;
- articles rejected by journals and conferences might contain slightly too original research and can be improved at Wikiresearch.
Accompanying source code should definitely not be under the GFDL. It is better to use a copylefted free software license, preferably the GPL.
[edit] Maintainership and responsibility
MediaWiki software allows to restore previous versions and see which user (or IP address) changed what. Up to now the various Wikipedia projects have experienced relatively little wiki vandalism.
Regular users with a login often check what has been changed and by whom, where anonymous edits are regarded with a bit more suspicion. Certain Wikipedia pages are attractive to vandalists, but these pages are also checked very often by regular contributors.
For original research things could be a bit more tricky. Time and a functional system will tell how much of a problem vandalism will be on a research wiki.
[edit] Author-itarian vs. anarchistic
Having only a bunch of main authors, or rather authorities would go against the wiki principle and it probably wouldn't lead to the amount of participation (and quality) attainable with a more anarchistic model.
[edit].
[edit].
[edit] Social aspects
[edit]., though on discussion pages it is very common to sign (leaving nick and date).
- Not attractive for most established scientists;
- Attractive to:
-.
[edit] Related
A lot of work can be done on improving Wikipedia articles, that can also be used as a clear definition for general concepts.
Results could take the form of a Wikibook, be used on Wikiversity, or even on Wikipedia.
wikibooks.org/wiki/Wikiresearch | http://en.wikibooks.org/wiki/Wiki_Science:Wikiresearch/presentation | crawl-002 | en | refinedweb |
This appleve.
The list and exact details of the "RPython" restrictions are a somewhat evolving topic. In particular, we have no formal language definition as we find it more practical to discuss and evolve the set of restrictions while working on the whole program analysis. If you have any questions about the restrictions below then please feel free to mail us at pypy-dev at codespeak net. but yield, for loops restricted to builtin types
range
range and xrange are identical. range does not necessarily create an array, only if the result is modified. It is allowed everywhere and completely implemented. The only visible difference to CPython is the inaccessability of the xrange fields start, stop and step.
definitions
run-time definition of classes or functions is not allowed.
generators
generators are not supported.
exceptions
We are using
integer, float, string, boolean
a lot of, but not all string methods are supported. When slicing a string it is necessary to prove that the slice start and stop indexes are non-negative.. String keys have been the only allowed key types for a while, but this was generalized. After some re-optimization, the implementation could safely decide that all string dict keys should be interned.
list comprehensions
may be used to create allocated, initialized arrays. After list over-allocation was introduced, there is no longer any restriction.
functions
objects
in PyPy, wrapped objects are borrowed from the object space. Just like in CPython, code that needs e.g. a dictionary can use a wrapped dict and the object space operations on it.
This layout makes the number of types to take care about quite limited.
While implementing the integer type, we stumbled over the problem that integers are quite in flux in CPython right now. Starting on Python 2.2, integers mutate into longs on overflow. However, shifting to the left truncates up to 2.3 but extends to longs as well in 2.4..
ovfcheck_lshift()
ovfcheck_lshift(x, y) is a workaround for ovfcheck(x<<y), because the latter doesn't quite work in Python prior to 2.4, where the expression x<<y will never return a long if the input arguments are ints. There is a specific function ovfcheck_lshift() to use instead of some convoluted expression like x*2**y so that code generators can still recognize it as a single simple operation. has the advantage that it is runnable on standard CPython. That means, we can run all of PyPy with all exception handling enabled, so we might catch cases where we failed to adhere to our implicit assertions.
Pylint is a static code checker for Python. Recent versions (>=0.13.0) can be run with the --rpython-mode command line option. This option enables the RPython checker which will checks for some of the restrictions RPython adds on standard Python code (and uses a more agressive type inference than the one used by default by pylint). The full list of checks is available in the documentation of Pylin.
RPylint can be a nice tool to get some information about how much work will be needed to convert a piece of Python code to RPython, or to get started with RPython. While this tool will not guarantee that the code it checks will be translate successfully, it offers a few nice advantages over running a translation:
Note: if pylint is not prepackaged for your OS/distribution, or if only an older version is available, you will need to install from source. In that case, there are a couple of dependencies, logilab-common and astng that you will need to install too before you can use the tool..
We are thinking about replacing OperationError with a family of common exception classes (e.g. AppKeyError, AppIndexError...) so that we can more easily catch them. The generic AppError would stand for all other application-level classes.
Modules visible from application programs are imported from interpreter or application level files. PyPy reuses almost all python modules of CPython's standard library, currently from version 2.5.2. We sometimes need to modify modules and - more often - regression tests because they rely on implementation details of CPython.
If we don't just modify an original CPython module but need to rewrite it from scratch we put it into pypy/lib/ as a pure application level module.
When we need access to interpreter-level objects we put the module into pypy/module. Such modules use a mixed module mechanism which makes it convenient to use both interpreter- and applicationlevel parts for the implementation. Note that there is no extra facility for pure-interpreter level modules, you just write a mixed module and leave the application-level part empty.
You can interactively find out where a module comes from, when running py.py. here are examples for the possible locations:
>>>> import sys >>>> sys.__file__ '/home/hpk/pypy-dist/pypy/module/sys/*.py' >>>> import operator >>>> operator.__file__ '/home/hpk/pypy-dist/pypy/lib/operator.py' >>>> import opcode >>>> opcode.__file__ '/home/hpk/pypy-dist/lib-python/modified-2.5.2/opcode.py' >>>> import os faking <type 'posix.stat_result'> faking <type 'posix.statvfs_result'> >>>> os.__file__ '/home/hpk/pypy-dist/lib-python/2.5.2/os.py' >>>>.
pypy/lib/
contains pure Python reimplementation of modules.
lib-python/modified-2.5.2/
The files and tests that we have modified from the CPython library.
lib-python/2.5.2/
The unmodified CPython library. Never ever check anything in there....
You can go to the pypy/lib/test2 directory and invoke the testing tool ("py.test" or "python ../../test_all.py") to run tests against the pypy/lib hierarchy. Note, that tests in pypy/lib/test2 are allowed and encouraged to let their tests run at interpreter level although pypy/lib/ modules eventually live at PyPy's application level. This allows us to quickly test our python-coded reimplementations against CPython.
Simply change to pypy/module or to a subdirectory and run the tests as usual.
In order to let CPython's regression tests run against PyPy you can switch to the lib-python/ directory and run the testing tool in order to start compliance tests. (XXX check windows compatibility for producing test reports).
write good log messages because several people are reading the diffs..
We have a development tracker, based on Richard Jones' roundup application. You can file bugs, feature requests or see what's going on for the next milestone, both from an E-Mail and from a web interface.
If you already committed to the PyPy source code, chances are that you can simply use your codespeak login that you use for subversion or for shell access.
If you are not a commiter then you can still register with the tracker easily.
Our tests are based on the new py.test tool which lets you write unittests without boilerplate. All tests of modules in a directory usually reside in a subdirectory test. There are basically two types of unit tests:.
You can write test functions and methods like this:
def test_something(space): # use space ... class TestSomething: def test_some(self): # use 'self.space' here
Note that the prefix test for test functions and Test for test classes is mandatory. In both cases you can import Python modules at module global level and use plain 'assert' statements thanks to the usage of the py.test tool.:
You can run almost all of PyPy's tests by invoking:
python test_all.py file_or_directory
which is a synonym for the general py.test utility located in the pypy directory. For switches to modify test execution pass the -h option.. | http://codespeak.net/pypy/dist/pypy/doc/coding-guide.html | crawl-002 | en | refinedweb |
This document will shortly be revised to exclude details of the mxODBC 1.x series.
mxODBC is a package for the Python programming language, providing connectivity to relational database management systems (RDBMSs) using the ODBC standard. This document aims to explain how to configure mxODBC on UNIX systems.
Note that in this document I will use the term "database system" to mean RDBMS unless otherwise stated. This seems to be a common application of the term.
Author: Paul Boddie (paul@boddie.org.uk)
First, read the instructions for the database systems you intend to use mxODBC with. Only after making the appropriate adjustments to the configuration files should you attempt to install mxODBC as this is a genuine installation process which will typically put files in special places within your Python installation.
Once you have made the appropriate adjustments, follow the instructions in
the mxODBC documentation. In brief, you will need to install the
egenix-mx-base-2.0.0 package in order to make the mxDateTime
package available to mxODBC. Then, you will need to install the
egenix-mx-commercial-2.0.0 package in order to make the mxODBC
package available in your Python installation. For both of these packages,
installation is done by entering the root directory of each package and
issuing the following command:
python setup.py install
You may need to be the
root user for this command to work
successfully.
Follow the instructions in the mxODBC documentation concerning the
installation of the software (along with the mxDateTime
package). In this document, when referring to this version of mxODBC, we
shall refer to the location of these packages as
/home/mx, so
that
/home/mx/ODBC and
/home/mx/DateTime are the
locations of the installed mxODBC and mxDateTime packages respectively.
First, it is important to be aware of the range of database modules available for Python. ODBC configuration can be very time-consuming, and if you can get a "native" module (meaning a module which uses the database system's own API) working using a reasonably convenient compilation, linking and installation procedure then you will probably have saved a considerable amount of time; this is likely to be the case if you have some Python experience but little ODBC experience.
There are, in my experience, two types of database system concerning connectivity issues:
If a database system is supplied with ODBC drivers then it may be worth trying mxODBC even if a "native" module is available. This is because mxODBC provides one of the nicest/closest implementations of the DB-API specification (version 2) that I have seen.
If a database system only provides its own libraries then be prepared to spend a lot of time finding the right drivers, configuring them, and learning how they operate, should you decide to choose ODBC as the connection mechanism. In comparision, the "native" modules are usually straightforward enough to install, although they may require header files not supplied with the database system and this could prevent you from building those modules unless you find such header files in other locations. In such situations you may end up being forced to choose ODBC. However, even if all the necessary resources are available, mxODBC may implement the DB-API specification better than the available "native" module and, in such cases, ODBC is going to be practically unavoidable - not all of the modules for Sybase have, in the past, tended to support parameters in queries/actions, but this feature is very useful for serious work.
An example of missing header files involves Sybase
Adaptive Server Enterprise (ASE) version 11.5 for Solaris 2.6. This
product lacks the
sqlda.h file which is needed by ctsybasemodule. However,
it can be found in certain downloadable packages on Sybase's Web site.
The following database systems are supplied with ODBC drivers:
Sybase Adaptive Server Anywhere (ASA) version 6.0.3 for Linux can be used with mxODBC either with or without the iODBC driver manager. The following instructions describe the process of using the supplied drivers directly.
/home/sybase.
/home/sybase/lib/dbodbc6.sois present. If not, you may need to reinstall ASA.
dbodbc6.solibrary within the
/home/sybase/libdirectory, calling it
libodbc.soor any appropriate library name which will be recognised by your compiler/linker when the flag
-lodbcis used. (This is recommended in the ASA installation instructions.)
egenix-mx-commercial-2.0.0package and edit the
mxCOMMERCIAL.pyfile, making sure that an extension definition is set up. The following patch replaces the default extension definitions with a suitable definition for ASA:
76,77c76,77 < 'mx.ODBC.iODBC', < 'mx.ODBC.unixODBC', --- > #'mx.ODBC.iODBC', > #'mx.ODBC.unixODBC', 83a84 > 'mx.ODBC.SybaseASA', 113,122c114,123 <'] > # ), 124,133c125,134 <'] > # ), 151a153,163 > Extension('mx.ODBC.SybaseASA.mxODBC', > ['mx/ODBC/SybaseASA/mxODBC.c', > 'mx/ODBC/SybaseASA/mxSQLCodes.c' > ], > include_dirs=['mx/ODBC/SybaseASA', > '/home/sybase/include'], > define_macros=[('SybaseAnywhere', None)], > library_dirs=['/home/sybase/lib'], > libraries=['odbc'] > ), > 179,180c191,195 < 'mx/ODBC/iODBC/COPYRIGHT', < 'mx/ODBC/iODBC/LICENSE', --- > #'mx/ODBC/iODBC/COPYRIGHT', > #'mx/ODBC/iODBC/LICENSE', > > #'mx/ODBC/unixODBC/COPYRIGHT', > #'mx/ODBC/unixODBC/LICENSE', 182,183c197,198 < 'mx/ODBC/unixODBC/COPYRIGHT', < 'mx/ODBC/unixODBC/LICENSE', --- > 'mx/ODBC/SybaseASA/COPYRIGHT', > 'mx/ODBC/SybaseASA/LICENSE',
Note that the package is called
SybaseASA, but the
definition required in the compilation process is called
SybaseAnywhere. To apply this patch, save it as
mxCOMMERICAL.py.diff and issue the following command:
patch < mxCOMMERCIAL.py.diff
Answer the question of the file to patch with the filename
mxCOMMERCIAL.py.
dbsrv6or
dbeng6program as you usually would.
LD_LIBRARY_PATHvariable must include the directory
/home/sybase/lib.
ODBC.SybaseASAmodule:
import mx.ODBC.SybaseASA
asademodatabase using the default user details:
c = mx.ODBC.SybaseASA.Connect("asademo", "dba", "sql")
/home/sybase.
/home/sybase/lib/dbodbc6.sois present. If not, you may need to reinstall ASA.
Sybasesubdirectory of the installed mxODBC package:
/home/mx/ODBC/Sybase
Setupfile, defining the following things:
-DHAVE_SQLDriverConnect \ -DASA \ -DODBC_UNIX \ -I/home/sybase/include \ /home/sybase/lib/dbodbc6.so
mxODBC.hfile, adding a special
ASAsection, as the following
diffoutput shows:
206a207,215 > #ifdef ASA > /* Adaptive Server Anywhere driver */ > # include "odbc.h" > # define MXODBC_INTERFACENAME "Adaptive Server Anywhere ODBC" > # ifndef HAVE_SQLDriverConnect > # define HAVE_SQLDriverConnect > # endif > #else > 291a301 > #endif /* ASA */
Sybasesubpackage as instructed on the mxODBC page.
dbsrv6or
dbeng6program as you usually would.
LD_LIBRARY_PATHvariable must include the directory
/home/sybase/lib.
ODBC.Sybasemodule:
import ODBC.Sybase
asademodatabase using the default user details:
c = ODBC.Sybase.Connect("asademo", "dba", "sql")
dbsrv6or
dbeng6.
Traceback (innermost last): File "
", line 1, in ? mxODBC.OperationalError: ('IM003', 0, '[iODBC][Driver Manager]Specified driver could not be loaded', 4265)
The following database systems are provided without ODBC drivers:
Sybase Adaptive Server Enterprise (ASE) version 11.5 for Solaris 2.6 is provided with some libraries which enable client applications to connect to and use the database system. However, these libraries do not directly support ODBC connectivity.
One source of ODBC drivers for ASE is OpenLink Software. They have many products, but the "Data Access Driver Suite (Multi Tier Edition) Version 3.2" product can be persuaded to work. The following instructions describe the process.
/home/sybase.
install.shscript, and then execute that script specifying a suitable location for the installed components. In these instructions we shall refer to this location as
/home/openlink.
/home/openlinkthere will be two files:
openlink.cshand
openlink.sh. These define environment variables which make the usage of the software more convenient. Add the appropriate definitions to your shell's startup file.
/home/openlink/bin/odbc.iniis set up correctly for the database system that you will be using. For example:
[ODBC Data Sources] Badger = Test of the OpenLink Generic ODBC Driver [Badger] Driver = /home/openlink/lib/oplodbc.so.1 Description = Sample OpenLink DSN Host = localhost ServerType = Sybase 11 FetchBufferSize = 99 UserName = Password = Database = ServerOptions = ConnectOptions = Options = ReadOnly = no Trace = 0 TraceFile = /tmp/iodbc.trace [Default] Driver = /home/openlink/lib/oplodbc.so.1
/home/openlink/bin/oplrqb.inifile contains the correct location of the ASE installation, as follows:
[Environment SYBASE11] SYBASE = /home/sybase DSQUERY = Vole
/home/openlink/bin/oplcfgprogram.
/home/openlink/samples/ODBC/odbctestprogram, using the following connection string (which uses the data source
Badgeras defined in the
/home/openlink/bin/odbc.inifile):
DSN=Badger;UID=username;PWD=password
Setupfile in the
iODBCsubdirectory of the mxODBC package:
/home/mx/ODBC/iODBC
Setupfile, exposing the following definitions:
-DiODBC \ -DUSE_PYTHONTYPE_BINDING \ -DPB \ -I/home/iODBC/include/ \ /home/iODBC/lib/libiodbc.so
PBdefinition must be used in the
mxODBC.cfile to prevent some code being executed when the
executemethod of a cursor object is invoked. The following
diffoutput summarises the change:
3053a3054 > #ifndef PB 3055a3057 > #endif
iODBCsubpackage as instructed below.
LD_LIBRARY_PATHvariable must include the directory
/home/iODBC/lib. In addition, the driver directory must also be stated in the
LD_LIBRARY_PATHvariable, so in the above example, this would be
/home/openlink/lib.
ODBC.iODBCmodule:
import ODBC.iODBC
Badgerdatabase using the appropriate user details:
c = ODBC.iODBC.Connect("Badger", "username", "password")
Solid Embedded
Engine version 3.5 for Solaris 2.6, when downloaded for evaluation, is
provided with some ODBC libraries and some demonstration programs which
connect to a database using the ODBC API. However, I could not get mxODBC to
work with these libraries, receiving errors when the
SQLNumParams function was invoked.
However, OpenLink Software is, as
with Sybase ASE, to the rescue with their "Data Access Driver Suite (Multi
Tier Edition) Version 3.2" product. Follow the instructions as you would for
Sybase ASE, substituting "Solid" for any "Sybase" references, and
Sybase-related filenames with the equivalent Solid-related filenames. It does
not seem to be necessary to tell the OpenLink request broker to use any
particular port in order to access a database, at least if that database is
being "exported" on the default TCP/IP port 1313. Presumably, the
/home/openlink/bin/oplrqb.ini file would need to be modified and
the
SOLID environment changed to recognise different addresses
and ports.
Unless you can link mxODBC directly with an ODBC driver, which is the case for some database systems you will need to install the iODBC driver manager. I found that the iODBC Developers Open Source Release V2.50.3 was suitable for Solaris 2.6, but for Linux the "iODBC Driver Manager Runtime Package" and "iODBC Developers Kit" seem to work as well.
Even if you installed the OpenLink components, to build the
ODBC.iODBC module you may still need to find the header files
for iODBC, since they may not be provided with those components. Therefore,
download the appropriate packages noted above and follow these
instructions:
/usr. In these instructions, however, we shall refer to this location as
/home/iODBC.
iODBCsubdirectory of the installed mxODBC package:
/home/mx/ODBC/iODBC
Setupfile, defining the following things:
-DiODBC \ -I/home/iODBC/include/ \ /home/iODBC/lib/libiodbc.so
libiodbc.solibrary referenced at build time and that referenced at run time might affect the operation of mxODBC, if you installed the OpenLink "Data Access Driver Suite (Multi Tier Edition) Version 3.2". You could copy the libraries found in
/home/iODBC/libinto
/home/openlink/lib, making sure that the symbolic links in that directory are adjusted accordingly.
iODBCsubpackage as instructed on the mxODBC page.
/home/openlink/bin/odbc.ini, then set up such a file in the home directory of the user who will be running Python and mxODBC, calling it
.odbc.ini. The following contents are suitable for using Sybase ASA 6.0.3 on Linux with iODBC (rather than by directly linking to the ODBC driver provided):
[asademo] Server = asademo Driver = dbodbc6.so
LD_LIBRARY_PATHvariable must include the directory
/home/iODBC/lib. In addition, the driver directory must also be stated in the
LD_LIBRARY_PATHvariable, so in the above example, this would be
/home/sybase/lib. | http://www.boddie.org.uk/python/mxODBC.html | crawl-002 | en | refinedweb |
at 3d7add7439dab6cdebf9adfa8faa2bf24f1faa87 (tag) tagging 02a112e67533ef38cdb99bd93b46b3232f0f0a44 (commit) tagged by Simon Schubert on Wed Dec 3 04:47:05 2008 +0100 - Log ----------------------------------------------------------------- DragonFly 2.1.1 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (DragonFly) iEYEABECAAYFAkk2AToACgkQr5S+dk6z85plAQCgtI1g8rwywzLIOq1xpSQgTRms WocAoPOzXOY8Dua3Y9rnDeYV5I8w7W05 =EZ8C -----END PGP SIGNATURE----- Aggelos Economopoulos (12): Let's try and start a tradition here. This is book is now a classic. The i386/amd64 abi specifies that the direction flag must be clear do early copyin / delayed copyout for socket options update unix domain socket option retrieval Provide MP_LOCK_HELD() for UP builds as well Update bluetooth ctloutput functions not to use copy{in,out} Don't do copy{in,out} in ctloutput any more More copy{in,out} removal for ctloutput routines Do copy{in,out} early in the {g,s}etsockopt system call paths Fix sockopt syscalls for the Nth time. Set fp to NULL on EINVAL Use cred from componentname, don't access curthread Charlie (1): test Chris Pressey (299): Pre-test spot removal solutions in an inconspicuous area before use. Practice commit: learn how to use 'cvs remove', so that, should I need Fix minor formatting errors in cpdup(1)'s man page. Clean up the code in bin/ to reduce warnings from GCC3. Clean up the code in bin/ to reduce warnings from GCC3. Clean up the code in bin/ to reduce warnings from GCC3. Clean up the code in bin/ to reduce warnings from GCC3. Clean up the code in bin/ to reduce warnings from GCC3. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Correct obvious typo in comment. Four new features and a bugfix. Style(9) cleanup. Style(9) cleanup: remove ``register'' keywords. Correct a typo that was introduced in revision 1.2. Correct misspelling of "orphan" and fix up comment structure. Style(9) cleanup. Update to style(9) guidelines. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup. Style(9) cleanup to src/sys/vfs, stage 1/21: coda. Remove `-ly' and `${LIBY}' from our Makefiles. Linking to liby is not Clarify the purpose of liby: Correct type slippage in previous commit: a u_int was accidentally Style(9) cleanup to src/sys/vfs, stage 2/21: deadfs. Style(9) cleanup. Style(9) cleanup to src/sys/vfs, stage 3/21: fdesc. Style(9) cleanup to src/sys/vfs, stage 4/21: fifofs. Style(9) cleanup. Style(9) cleanup to src/sys/vfs, stage 5/21: ext2fs. Style(9) cleanup to src/sys/vfs, stage 6/21: hpfs. Style(9) cleanup. Style(9) cleanup to src/sys/vfs, stage 7/21: isofs. Style(9) cleanup to src/sys/vfs, stage 8/21: mfs. Make a comment less misleading. rpcbind_enable (portmap) isn't Style(9) cleanup to src/sys/vfs, stage 9/21: msdosfs. Style(9) cleanup to src/sys/vfs, stage 10/21: nfs. Fix code typo in previous commit to this file, thus allowing make(1) Style(9) cleanup to src/sys/vfs, stage 11/21: ntfs. Clean up typos and punctuation in comment. Remove two unused GEOM function prototypes. They were hangovers from Style(9) cleanup to src/sys/vfs, stage 12/21: nullfs. Style(9) cleanup. Split the suggested invocation of fdisk (which is failing for some Style(9) cleanup to src/sys/vfs, stage 13/21: nwfs. Style(9) cleanup. Style(9) cleanup to src/sys/vfs, stage 14/21: portal. Clean up style(9) issues that were missed in previous commit to this Style(9) cleanup. Merge with FreeBSD (RELENG_4) src/sys/dev/syscons/syscons.c, Add missing function prototype. Style(9) cleanup to src/sys/vfs, stage 15/21: procfs. Style(9) cleanup to src/sys/vfs, stage 16/21: smbfs. Style(9) cleanup to src/sys/vfs, stage 17/21: specfs. Merge with FreeBSD (RELENG_4) src/bin/mv/mv.c, revision 1.24.2.6: Merge with FreeBSD (RELENG_4) src/sys/msdosfs/msdosfs_vfsops.c, Style(9) cleanup to src/sys/vfs, stage 17/21: udf. Make a small grammar correction: "...most forms of the disklabel Fix a bug in bsd.port.mk that was causing 'make clean' to fail in some Merge with FreeBSD (RELENG_4) src/usr.sbin/timed/timedc/timedc.c, Three minor fixes: Fix catman(1) so that it recognizes valid manpage names as valid. Style(9) cleanup to src/sys/vfs, stage 19/21: ufs. Style(9) cleanup to src/sys/vfs, stage 20/21: umapfs. Style(9) cleanup to src/sys/vfs, stage 21/21: unionfs. Style(9) cleanup to src/sys/netinet6: Correct sort order of SUBDIRs. Style(9) cleanup to src/usr.sbin: remove remaining `register' keywords. Clarify how our `test -w' is a superset of POSIX' requirements. The default shell for the user `man' is nologin(8). Make the examples Style(9) cleanup: Fix typo. Fix typo in error message - there is no system call named "lchflags." Add -u option to cpdup(1), which causes its -v[vv] logging output to Merge with FreeBSD (RELENG_4) src/usr.bin/script/script.c, revision Fix a bogus type declaration to remove a compiler warning - `group' Merge with FreeBSD (HEAD) src/usr.sbin/kbdmap/*: Merge with FreeBSD (HEAD) src/usr.sbin/adduser/*: Make virecover more robust. Instead of aborting (and causing Perl is no longer user during kernel build, so remove the Perl scripts Correct the FreeBSD attribution on this file. Correct the FreeBSD attribution on this file. Synch to modifications in the installer scripts: Don't enable CAPS unnecessarily. Remove temporary files immediately after they're no longer needed. Change the REQUIRE from `inetd' to the more sensical `mountoptional'. Update the URL from which the installer packages can be obtained. Clarify behaviour of the -s option. In dumpon(8) usage, `off' is a literal keyword, not a parameter name. Style(9) cleanup: Import from NetBSD: `pgrep' and `pkill' utilities for finding and DragonFly-ize pgrep/pkill: Correct an inaccuracy in disklabel(8)'s manual page: the default Update list of installer packages to match installer version 1.1. Hook pkill up to the build. Update installer packages to 1.1.1, fixing a couple of minor bugs: Correct an inaccurate statement. According to my testing, cpdup never Update installer to 1.1.2. Add various improvements to the "pre-flight installer": Style(9): Document the slightly surprising behaviour of 'ifconfig -alias' when Bump WARNS to 6. Bump WARNS to 6: Bump WARNS to 6. Oopsie; use WARNS?=, not WARNS=. Bump WARNS to 6. Bump WARNS to 6: Style(9): Bump WARNS to 6: Bump WARNS to 6: In install_* targets, honour any EXTRA_PACKAGES and EXTRA_ROOTSKELS Improve seperation between kernel code and userland code by requiring Bump WARNS to 6 and apply some style(9): Bump WARNS to 6. Merge with FreeBSD revision 1.16 (sheldonh): Merge with FreeBSD revision 1.18 (jmallett): Raise WARNS to 7 and apply some style(9): Apply some style(9): Merge with FreeBSD revision 1.20 (markm): Make an example follow the recommendation given in the immediately Merge with FreeBSD revision 1.21 (kientzle): Clean up the usage message, both in usage() and in the main program Use real getopt() handling instead of the hand-rolled and IOCCC-worthy Raise WARNS to 6: Style(9): the return type of strcmp() is not a boolean, so don't Raise WARNS to 6. Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6. There is no such thing as WARNS=7 (yet,) so use WARNS=6. Apply style(9):. Clarify a run-on sentence by splitting it into two. snprintf() and Style(9): remove `register' keywords. Style(9) cleanup: Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6. Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6. Raise WARNS to 6: Raise WARNS of newfs to 6: Style(9): Clarify/clean up code, make fewer assumptions about types: Raise WARNS to 6: Raise WARNS to 6: Further cleanup: Raise WARNS to 6: Raise WARNS to 6 and generally clean up: Raise WARNS to 6: Raise WARNS to 3: Reduce warnings when compiled under WARNS=6: Raise WARNS to 6. Raise WARNS to 6: Raise WARNS to 6: mkdep(1) seems to want the current directory on the include path, Fix typo in error message. Raise WARNS to 6. Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6. Unlike printf(3), fwrite(3) doesn't stop at the first NUL character Use memchr(3) instead of a hand-rolled loop. Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6: Raise WARNS to 6: Clean up: Raise WARNS to 6: Raise WARNS to 6: Partial merge with recent revisions[1] from FreeBSD. Clean up: Partial merge with recent revisions from FreeBSD: Apply less dodgy range checks. Fix up a bug that crept in during the strcpy->strlcpy conversion; Change type of find_compare() so that it doesn't need to be casted Partial merge with FreeBSD: revisions 1.23 through 1.34. Raise WARNS to 6: Merge with FreeBSD, revision 1.27 (fenner): Merge with FreeBSD, revision 1.29 (imp): Merge with FreeBSD, revision 1.30 (markm): Further cleanup: Import (slightly modified) ru.koi8-r.win.kbd:1.1 from FreeBSD (fjoe): Hook ru.koi8-r.win.kbd up to Makefile and INDEX.keymaps. Raise WARNS to 6. Add "exit" command as a synonym for "quit". Make pgrep(1) print a newline at EOL, as expected, instead of at the Style(9): Style(9): In the parent process, close the slave file descriptor at the earliest Raise WARNS to 6: Style(9): Don't have binsrch() alter a global variable if it fails. Instead, Constify filename variable. Raise WARNS to 6: Merge with FreeBSD, revision 1.16 (stefanf): Raise WARNS to 6. Sync with FreeBSD, random.c:1.17 and random.6:1.7-1.8 (ru): Improve data validation: Raise WARNS to 6: Add WARNS?=6 to Makefile. Update installer to version 1.1.4. Highlights include: Remove 'pristine' file tree. It is no longer needed now that there is Update cdrtools package requirement to version 2.01. Add the pfi_backend variable, which can be used to specify which Reduce warnings as part of WARNS=6 cleaning, stage 1/6: Reduce warnings as part of WARNS=6 cleaning, stage 2/6 (or so): Reduce warnings as part of WARNS=6 cleaning, stage 3/6 (or so): Reduce warnings as part of WARNS=6 cleaning, stage 4/6 (or so): Reduce warnings as part of WARNS=6 cleaning, stage 5/6: WARNS=6 cleaning, stage 6/6: Style(9): Style(9): Style(9): Style(9): Style(9): Small step towards WARNS=6: Raise WARNS to 6: Unbreak the pfi_autologin option by fixing a broken invokation of sed - Tack pfi.conf onto the end of rc.conf later (only after pfi.conf If a get_authorized_hosts file exists on the pfi media, copy it into Raise WARNS to 6: When rendering the routename() or netname() of a struct sockaddr When dumping the contents of a struct sockaddr with an unknown address Raise WARNS to 6. De-spaghettify: Followup to previous commit: use uint8_t instead of u_char, for a real We have _DIAGASSERT now; might as well enable it here. Style(9): Style(9): Style(9): Style(9): Raise WARNS to 6: Raise WARNS to 6. Raise WARNS to 6: Add the pfi_curses_escdelay variable. When using the curses frontend, Update installer to version 1.1.5. Highlights include: Update the information regarding the installer and the pfi Punctuation, formatting, grammar, and spelling nits. Back out revision 1.31. The 'release' target alone should be The error message issued when a requested package cannot be found has When installing packages into an ISO-image-to-be file tree: Reduce warnings that appear under higher WARNS levels: Style cleanup: instead of seperate #defines for NULL cast to different Fix grammar (associate -> associated) and start sentence on new line. Make comments about bpf_validate()'s behaviour reflect reality. The Style(9) cleanup: use ANSI format for function definitions. Merge with FreeBSD vary.c:1.16 and date.1:1.68 (yar): Merge with FreeBSD netdate.c:1.17 (charnier): udp/timed -> timed/udp Merge with FreeBSD 1.44 (dds): report and exit on write error. Raise WARNS to 6: Sync with all revisions up to FreeBSD date.1:1.72 (ru), notably: Style(9): Reduce diffs with BSD Installer repo: Introduce a make variable PACKAGE_SITES to improve package fetching in Document non-obvious behaviour of missing 'a' line in fdisk script. Clarify examples by differentiating between literals and arguments. Make 'make upgrade' work from the LiveCD, stage 1/5 or so: Make 'make upgrade' work from the LiveCD, stage 2/5 or so: Make 'make upgrade' work from the LiveCD, stage 3/6 or so: Make 'make upgrade' work from the LiveCD, stage 4/5 or so: Make 'make upgrade' work from the LiveCD, stage 5/5: Two small alterations to 'make upgrade': Chris Turner (2): testing, 123 Add '-l' support to vnconfig(8) and supporting VNGET ioctl to vn(4). Dave Hayes (5): Testing a commit per man page Installer import Installer import into contrib (real import this time) Merge from vendor branch BSDINSTALLER: * Allow nrelease/Makefile to build the local installer in usr.sbin David P. Reese, Jr. (34): Change the split syscall naming convention from syscall1() to kern_syscall() Make the linux emulator use the newly split listen(), getsockname(), Create an emulation/43bsd directory and move the recently modified Separate all of the send{to,msg} and recv{from,msg} syscalls and create Split getsockopt() and setsockopt(). Modify kern_{send,recv}msg() to take struct uio's, not struct msghdr's. Modify linux_{send,recv}msg() and linux_{set,get}sockopt() to use the Introduce the function iovec_copyin() and it's friend iovec_free(). Fix a bug in the last commit where sendfile() would forget to drop a file Implement socket() and shutdown() using the in-kernel syscalls. This Rename do_dup() to kern_dup() and pull in some changes from FreeBSD-CURRENT. I wasn't properly checking for rollover in iovec_copyin(). Remove code Create kern_readv() and kern_writev() and use them to split read(), pread(), Create the kern_fstat() and kern_ftruncate() in-kernel syscalls. makesyscalls.sh wants comments to be on their own line. Move the Checkpoint Remove the FreeBSD 3.x signal code. This includes osendsig(), Split wait4(), setrlimit(), getrlimit(), statfs(), fstatfs(), chdir(), Unbreak svr4 damage from last changes to kern/kern_resource.c. The recent lseek() fix didn't touch olseek(). I don't see why, as olseek() The big syscall split commit broke utimes(), lutimes() and futimes() when Split execve(). This required some interesting changes to the shell Remind myself and others that kern_readlink() isn't properly split yet. Split mkfifo(). Kill two more stackgap allocations. One is in linux_sysctl(). The other Move ogethostname(), osethostname(), ogethostid(), osethostid(), and Move ogetkerninfo() to the compat tree without any changes. Split mmap(). Remove an unused variable from last commit. Sync with FreeBSD-5.x MODULE_DEPEND() and DECLARE_MODULE() broke viapm. Implement linux_truncate64() and linux_ftruncate64(). Implement linux_truncate64() and linux_ftruncate64(). Implement linux_mmap2(). Fix a bug in signal translation introduced in last revision. This effects Fix linux_getrlimit() and linux_old_getrlimit() which weren't copyout()'ing David Rhodus (277): Null terminate the new argv[] array when converting 'tar zcf' to Fix for trap 12 on KVA space exhaustion. Fix CPU stats percentages formatting Try to distinguish PCS encoding error events Fix a problem that occurs when truncating files on NFSv3 mounts: we need Don't map LINUX_POSIX_VDISABLE to _POSIX_VDISABLE and vice versa for Allow the caller to get an error directly if we sent the packet immediately. Fix compile error. LINT breakage fix. The iBCS2 system call translator for statfs(2) did not check the Fix for insufficient range checking of signal numbers. Fix a bug which caused signals on YUV images to fail. Add or correct range checking of signal numbers in system calls and * Fix the type of the NULL arg to execl() Fix a bug that could cause dc(4) to m_freem() an already freed Add function so Control-D erases the screen. regen Allow the optional setting of a user, primary group, or grouplist Update man page to reflect the optional setting of user, primary Cleaned up dead declarations. Sync with FreeBSD M_PREPEND() can fail, meaning the returned mbuf The game rogue(6) has a potential buffer overflow allowing Add USE_SLAB_ALLOCATOR into the config. #include reorganization Add a few more build options. Inital cleanup work to make NETNS compile again Add NETNS to the LINT config file. Do not record expanded size before attempting to reallocate Clean up. Correct address parsing bug that is believed to be remotely exploitable. * de- __P() * de- __P() * de-__P() * Intel ACPI 20030228 distribution with local DragonFly changes. * Finish up last commit. Forgot another one. Looks like we can't have comments on the same * Add #include <sys/buf2.h> * Add the Promise Supertrack to the build Correct a case in readv(2) where the file descriptor reference count Introduce a uiomove_frombuf helper routine that handles computing and * Turn on debugging options by default * Try to make gencat(1) recognize. * Correct several integer underflows/overflows in linprocfs * Fix a typo that was introduced from the last change. * Add this nice filesystem testing tool that I've recently * Move variable 'p' into a more proper place. * Add missing function declaration that was some how * Pull is some security checks from sendmail 8.12.10. * Un-break buildworld * Nuke ~771 __P()'s from our tree. * Fix a that showd up on 64 bit systems. It was actually * Change the buffer lenght test in NEEDSP() so that it does not * Add a comment to the file pertaining to the last commit * Handle realloc() failure correctly. * Call crypt() directly instread of taking a detour through makekey. * Fix BBS buffer overflow in makeargv(). * Fix two buffer overflows caused by off-by-one errors: avoid writing * Do not set the No_CRC bit in the Mode Control Register. * Fix a potential race condition in crfree. * Temporarily back out last change because of a problem reported * Add ACPI to the LINT build. * s/FreeBSD/DragonFly/ So I know what system I'm booting up now. * Fix some __P() mess up. * Fix a locking issue inside the inode hashing code which * How did these get here. * Add quirk for LEXAR 256MB JUMPDRIVE * Prevent leakage of wired pages by setting start_entry * s/FreeBSD/DragonFly at the boot2 prompt. * buildworld doesn't need to look at nrelease. * Attempt to save the last dregs of emacs users' sanity by saving the * don't fclose() a bogus FILE *. * Document the fact that send(2) can return EPIPE (like when a * Allow a return of 0 from __sys_write() to exit the loop in libc_r's * Add support for gb18030 encoding. * Add GBK encoding support. * Fix problem where initgroups would silently truncate groups with * Sync comment with code's reality. * Might help if we built gbk.c and gb18030.c * Remove a few extra $FreeBSD$ tags * Add a part of the AGP update that was missed. * Print out a warning if /dev/null doesn't exist. * Add GB18030 directories. * GBK locale directories * Try and match README guideline better. * Update Sendmail to version 8.12.10 * Sync Changes over from FreeBSD * Add the POSIX.1-2001 header file <cpio.h> * Cleanup compiler warnings and other style cleaning while here. * Add the pst(4) driver to the GENERIC kernel build. * Merge From FreeBSD * Merge From FreeBSD * Implement POSIX (XSI)'s header file re_comp.h * Change __guard back into a multi-character int * Add ending comment to line as to avoid warning messages. * Merge From FreeBSD * Merge From FreeBSD * Remove references to deprecated options. * Welcome the DragonFly project into to new year by bumping * Add in some basic parameter values that we will use for * Merge in changes from FreeBSD with some modification * Merge in the copyright that DragonFly is being distributed under. * Add usb_mem.c back into the build process so the usb module * Don't install the freefall gnats files into /etc as * Remove the HTT option from the kernel as we can now * Remove HTT option from kernel config files. * Update so as to reflect structure name changes to upcall.h * Perform some man page maintenance bringing the miibus related * Use id(1) instead of grep(1) to detect the presence of the smmsp * Add in support for the IBM ServeRAID controller. * Change SUBDIR sort order. * Add in ips.4 to the Makefile so it gets installed * Add in 'device ips' to the LINT kernel config. * Fix libcaps build by syncing up with kernel work. * Drop the directory outline for building gcc3. * Starting using GCCVER when needed and other things * Fixup gcc3 build process to use correct install path. * Fix man page names. * Remove extra compile flag to clean up redefine warnings. * DragonFly<-FreeBSD name change in boot loader code. * Change FreeBSD ifdefs to DragonFly * Change FreeBSD ifdef over to DragonFly * DragonFly<-FreeBSD Name clean ups * Add kern_systimer.c to the kernel build process. * Sync pages with FreeBSD 4.x nectar 2004/02/05 10:00:35 PST * Update function defines to match up with the work from * Remove ufs_disksubr.c from kernel build files. * Return if array is out of bounds in icu_setup function. * Fix placement of return value. * Remove instances of PRIBIO because DragonFly no longer * Correct end value calculation. This should fix the numerous problems * Add in support for the Silicon Image SATA controller. Start removing the old build infrastructure for the a.out Hook in UDF to the build process. Should have been on last commit. Change which(1) over from a perl script to a C program. Change whereis(1) over from a perl script to a C program. Next time I'll run cvs update to make sure I've added Attach mount_udf to the buildworld process now. Update rc.d scripts to use the correct path for the named pidfile. Change this vnode check inside of the VFS_BIO_DEBUG * Fix an off-by-one problem. * Change the offset alignment in vn_rdwe_inchunks() Make sure the ELF header size is not too large. This fixes a potential over Move vm_fault_quick() out from the machine specific location The existing hash algorithm in bufhash() does not distribute entries Update tag creation paths to reflect the cvs Add back in basic ACPI support into the LINT kernel build. Place 'device acpica5' into the TINDERBOX kernel config so Add in the twa(4) driver. This adds in support for the 3Ware Add in kernel config file options that were Define that Dragonfly HAS isblank() as to fix the Fix an symlink ordering issue that was causing the install Fix symlink ordering issue with the less man page. Count statistics for exec calls. Add in the new acpica5 to the device build path. Remove the VREF() macro and uses of it. General update: Cosmetic changes. Leave out acpica5 from the device build path for a little longer, well Cosmetic changes. Point the remote site to use gobsd.com so the remote option now works. Cleanup pass. Removed code that is not needed anymore. Remove unneeded typecast. oops. Add the SysKonnect ethernet driver to the GENERIC kernel config. Default the MMX/XMM kernel optimizations to the on position. Also make a Fix installworld by making it aware of the rconfig examples. Create twed0 entries in /dev Drop to the debugger if unable to sync out the buffers during shutdown. Unhook gcc-3.3 from the buildworld process. This also removes the gcc3 Use correct copy-write notation for The DragonFly project copy-writes. Sync in Copyright notice change from FreeBSD. Update some copyright notices to become more legal compliant. Fix compile warning. Remove unused variable. Merge in FreeBSD-SA-04:13.linux Fix warning when building with c99. Fix memory leak. Spelling. Fix a typo in last commit as to fix the EHIC compile. Bump the rev to 1.1-CURRENT. Implement advisory locking support for the cd9660 filesystem. Perform some basic cleanups. Change some types over to C99 standard Remove __FreeBSD__ defines which were not needed and caused compile Cleanout __FreeBSD__ defines which we don't need. Cleanout __FreeBSD__ defines which are not needed. DragonFly MP support, not fbsd anymore. Emulate __FreeBSD__ till 3rd party applications later add in DragonFly POSIX.1-2003: Changing the group ID is permitted to a process with an Correct line wrap. Correct getpriority() and setpriority() function definitions to We will need more resource-limiting compliance work done before these Correct path location search when pulling packages from GoBSD. Document that there seems to be a problem with the syncer not running Remove unused variable. Plug in missing brelse calls as to fix a bug in the FFS reload code. Check error return value when creating the IPC service. Minor cleanups. Make a type usage correction. Remove unneeded cast when Add a few notes. Spelling. Clearly I need to wakeup all the way before starting to make changes. Fix a problem introduced by the last commit. Change the array of char to an array of struct pollfd to avoid an Avoid leaving an obsolete pointer while the interface is detaching yar 2004-08-28 12:49:58 UTC Add in support for the iRiver iFP MP3 player. add in support Frontier Labs NEX IA+ Digital Audio Player with USB CF Bring in fix from gzip 1.3.3 to avoid crashes when processing certain do not send icmp response if the original packet is encrypted. Fix typo. Fix printf example. I guess I should bumb document date too. Print link level address on vlan interfaces using ether_ntoa(), to make Fix a problem with the NSIG check which could lead to an exploitable Fix another incorrect usage of NSIG. Add support saving both user/group permission on symlinks(-p) Remove register keyword usage. Check that the authpf user and group are installed before starting the Since there isn't an authpf user we'll have to grep for the authpf Declare some of the local variables static. Update the 3ware card comment to better reflect which cards are supported. Sync up items from the GENERIC config and a few more items to make Spelling. Initialize 'version' variable to silence compiler warning. Quiet tinderbox: Spelling: Filesystem is one word. Handle all types of interrupts when operating the uhci(4) controller in Remove the old system perl binary when performing 'make upgrade'. ANSI and whitespace cleanup. No operational changes. Fix typo. re-fix typo. Maybe its time to start drinking coffee in the morning. :-) Add information about return code for immutable files. Add information about return code for immutable files. Remove call to readdir() that was missed durning the libc namespace Connect our 1:1 threading library to the build. Compensate for off by one bugs in disk firmware for 48BIT addressing cutover. no need to have break; after return; Fix some signed/unsigned comparisons. Clean up main() arguments. Unbreak the pkg_add -r which the last commit broke. Testing a new commits monitoring script. test Correct a pair of buffer overflows in the telnet(1) command: Merge from vendor branch HEIMDAL: Correct a pair of buffer overflows in the telnet(1) command: Remove some uses of the SCARG macro. Remove the telnet program from the heimdal source as it contains Remove the telnet program from the heimdal source as it contains Merge from vendor branch HEIMDAL: Add in a 'livecd' taget for those who don't want to include the Clean up some register keyword usage. Remove some uses of register keywords. Fix kernel build issue. Missing function argument. Replace spl with critical sections. Remove reference to devd. Fix typo. Remove scheduler define which was never used. Make print statements more verbose. ANSI-fy some functions, remove some duplicate CVS tags and add in some Remove some register keywords and add in some missing $DragonFly$ tags. The packages have been moved to the fireflybsd server now as it is a scm test. Change the MAX_BUTTONS count to 31. This allows us to recognize more than Add -P to the cvs example commands. Merge in security fix from FreeBSD. Fix typeo. Add missing code needed for the detection of IPSec packet replays. NetBSD merge. NetBSD merge. Correctly identify the user running opiepasswd(1) when the login name Properly copy in userland scheduler name via copyinstr. Mark a few more system calls MPSAFE. Merge from FreeBSD. Hold MP lock for getppid(). As noted by Dillon getppid() is not MP safe. David Xu (96): My first commit. Import initial version of 1:1 pthread library. Implement sigtimedwait and sigwaitinfo syscalls. Implement sigwait. generate __sys_sigtimedwait and __sys_sigwaitinfo to allow Implement cancellation points for sigwait, sigtimedwait and sigwaitinfo. Regen. Fix timeout verification bug. Matt Dillon asked to check tv_sec, Remove debug code. Remove unused macro. Set initial thread's stack to 2M on 32 bits platform, 1M for default stack. Use new kernel tls interface. With new tls interface, sysarch.h is no longer needed. Code cleanup, remove unneeded includes. Remove alpha support. Style fixes. Add barrier prototypes. Enable barrier support. In kern_sigtimedwait, do not mask SIGKILL because a stopped process Merge from FreeBSD: Add more relocation types. Import tls support code for static binary from FreeBSD. Add tls functions prototype. Implement _set_tp which is used to set TLS pointer into Oops, fix license format(copy and paste problem). Compile tls support code. Compile tls.c 1. use __weak_reference to define weak symbol. Initialize tls support code for static binary. Add clockid to _thr_umtx_wait, so that clockid attribute of condition Fix comments. unconstify a modifiable parameter. Add prototypes for pthread_condattr_getclock and pthread_condattr_setclock. Add following prototypes: Fix incorrect comment. Add types prgregset_t and psaddr_t which will be used by Add support for TLS. Use rtld's TLS interface to allocate tcb. Don't restart a timeout wait, one can periodically send signal to Pass exact number of threads to thr_umtx_wake. tcb is now managed by rtld, caching it is not correct, Remove unused strong reference. Fix brokeness by using _pthread_xxx functions instead. Backout revision 1.5, the pthread->error was used as internal Fix comment of error member, now errno is TLS based. Define type lwpid_t, it will be used for thread debugger code. Add prototypes of proc service API, every debugger must provide the Add libthread_db, a thread debugging support library. 1. Fix symbols needed by libthread_db. Remove unused function. Sort names. Add locking for FILE. Eliminate PS_FILE_WAIT state, instead use FILE locking code in libc. o Fix obsolete comment. Override _flockfile_debug. I am lost. if thread is blocked on mutex, _thread_kern_sched_state() Fix off-by-one error. Replace THR_FLAGS_SUSPENDED with THR_FLAGS_NEED_SUSPEND, this Introduce pthread_timedjoin_np(). Introduce pthread_timedjoin_np. put pthread_timedjoin_np in right order. Add pthread_mutexattr_setpshared and pthread_mutexattr_getpshared. Export following functions: Make usleep as a weak symbol, so thread library can override it. Make usleep as a cancellation point. Remove a redundant tls_get_curthread() call. Remove unneeded tls_get_curthread() call. Clear return code to zero if joiner sucessfully waited joinee. MFBSD: If we got a SIGKILL signal in kern_sigtimedwait, call sigexit to Close a race for thread detach. copy flag DETACHED. Rewrite mutex_init, get rid of compile warnings. Oops, disable debuging code. Make thread suspension really work. Revamp the algorithm to acquire a contested lock, the intention Update UMTX_LOCKED and add UMTX_CONTESTED macro. Convert weak reference to strong reference so that static library Pull in all symbols needed for static binary. Unbreak buildworld. Move some global variables into its module, remove priority mutex code Add function prototypes: pthread_atfork, pthread_attr_getguardsize, Add all pthread functions into libc namespace. WARNS level 4 cleanup. s/long/int Unlock recursive mutex in pthread_cond_wait, though this is arguable. Tweak source code a bit to make gcc to generate better code. Add compiler branch prediction hint macros, obtained from FreeBSD. Use the branch prediction macros in sys/cdefs.h. 1) Use int for m_flags of pthread_mutexattr. namespace cleanup. Seperate _mutex_cv_unlock from _mutex_unlock_common. Allow userland to bind a process to specific CPUs. The initial Oops, the usched_set syscall prototype should be updated. Regenerate. Use type lwpid_t for lwp_tid. Eirik Nygaard (204): Add some missing $DragonFly$ keywords. Remove __P macros from src/usr.bin and src/usr.sbin. Remove some __P macros in sbin that my script missed. Fix typo in cloud -> in a cloud. Remove send-pr from the build. Make truss check if procfs is mounted, and where it is mounted. Reattact send-pr to the build and change the mail address to bugs@crater.dragonflybsd.org Nuke some more __P macros. * Add a missing $DragonFly$ keyword Remove the rest of the __P macros in src/usr.sbin * Parse arguments with getopt * then -> they Add missing $DragonFly$ keywords * Remove some __P macros from gnu/usr.bin/ * __P removal from games/ * Get rid of of the _BSD_XXX type usage. * Add missing $DragonFly$ keywords. * Removed the __P macros from lib/ * K&R function cleanup * K&R function cleanup * K&R function cleanup * K&R function cleanup * K&R function cleanup * Added tabs instead of spaces for readability and consistency with NetBSD * Add DragonFly as an possibility to a case statement. Fix a bug which causes wrong filename being written into the syslog * Make pkg_add fetch the packages from /packages-4-stable instead of * K&R function cleanup * Remove unused #includes. * Merge fixes from libc to libcr. * Indentation cleanup * Let mergemaster look for make.conf in the correct path. * Fix typo: becuase -> because * K&R function removal * Remove the use of the kern.devname sysctl since it does not exist. * Remove the historical r in front of the old raw character devices in /dev * Default to a calendar file that actually exists (calendar.all) * Add devname_r(3) function, which takes a buffer and buffer length as * Nuke calendar.freebsd We don't have the ACPI module, so don't try to load it at boot. Last commit was completely wrong. Reverting to old revision of file. Catch up with recent libc updates. His name is Fred. Let the man page know about varsym links. Remove genassym and gensetdefs from the cross build as well. Readd ending '\' so it compiles the cross build tools Remove stale text in front of the DragonFly and FreeBSD keywords Add missing */ on the end of a comment. Replace K&R style functions with ANSI C style functions. Fix comment to show the munlockall function, and not the mlockall one. Remove unused settings Update FILES section Use MALLOC_DEFINE. style(9) cleanup: style(9) cleanup: style(9) cleanup: Make the comment a bit clearer. this -> the fix in comment. Move the ASSERT_VOP_LOCKED and ASSERT_VOP_UNLOCKED macros into its own Remove perl from the i386 kernel build. Update libcr with recent libc updates. * Remove ``register'' keywords. De-perlify. Add lock.9 man page, it is also symlinked to: lockcount.9, lockinit.9, We are DragonFly not FreeBSD, so rename the name in GENERIC, and remove the Tell awk where the arguments for the program ends and where the arguments for Remove gawk from the build and let one-true-awk be our default awk from now Fix a core dump caused by a .DEFAULT target with no commands. Merge from vendor branch DIFFUTILS: Import of diffutils 2.8.1 Remove vim swap file, which should not have been imported. Update diff, diff3 and sdiff to use diffutils 2.8.1 Merge from vendor branch AWK: Import of awk 20040207 Update to awk 20040207. Merge from vendor branch LESS: Import of less 381 Update less to version 381. Merge from vendor branch LIBPCAP: Import of libpcap 0.8.3 Merge from vendor branch TCPDUMP: Import of tcpdump 3.8.3 Update libpcap to version 0.8.3 Update tcpdump to version 3.8.3 Remove two mis-added .orig files. Remove unused error variable. Change mbug allocation flags from M_ to MB_ to avoid confusion with malloc Add ref counting to the checkpoint handler and unload function so an unload Split the __getcwd syscall into a kernel and an userland part, so it can be Print the entire path to where the checkpoint file is being created to avoid Use MAXPATHLEN as the size of the checkpoint filename sysctl variable instead Swap order of first and second argument in bcopy, forgot this when changing Update tcpslice to version 3.8. This is less, correct name in upgrade file. Make sure the xe driver found a supported card type, if it didn't then bail Some laptops return other values for working toucpads. Allow test_aux_port to -{h,k} are mutually exclisive. So only pay attention to the last of the two Speed up hardlink detection by using a self-sizing hash table rather than the Change machien to machine. Rearrange the machine/cpufunc.h header and add it where needed to make libcaps Play catchup with libc. Fix formatting error. The syscall number lives in ms_cmd.cm_op now. * Remove one of the two err.h include statements. Forced commit to note that this fixes was submitted by: "Liam J. Foy" Fix build with gcc34. Remove a not needed main() definition. style(9) cleanup: style(9) cleanup. Add missing va_end(ap); Don't depend on pollution in <limits.h> for the definition of <stdint.h> macros. Add WARNS?=6 and remove CFLAGS=. Remove local main() definition. Fix gcc 3.4 build. Fix gcc 3.4 build. Fix gcc 3.4 build. Since we are DragonFly we want to use the DragonFly version instead of the Since we are DragonFly we want to use the DragonFly version instead of the Fix gcc 3.4 build. Add message passed syscall's. Use strchr instead of index, and strrchr instead of rindex because the str*chr Use strchr instead of index, and strrchr instead of rindex because the str*chr Redo the sysmsg test programs to match the changes done in the kernel to sendsys.h is no longer needed. Move the simple pager from db_ps.c into the more obvious file db_output.c and Add noops for vga_pxlmouse_direct and vga_pxlmouse_planar in the case where Add KTR, a facility that logs kernel events to help debugging. You can access Add two more header in the !_KERNEL case so libcaps knows about * Fix spelling. * static functions * pid should be pid_t not int style(9) cleanup. Update the sysperf tools to match the current sysmsg functions. #endif does not take an argument, comment it out. Consify some variables. style(9) Remove not needed void casts. Constify VAR. Remove xpt_release_simq_timeout(), this function has not been in use since the Back out constification. This caused a SIGBUS becaus a functon wanted to write WARNS= 6 cleanup. Fix traceroute. Move away from GNU traceroute and use the BSD licensed one. Make the code WARNS= 6 clean. It is sys_checkpoint.2 not checkpoint.2 Fix type. It is pkgs and not pksgs. Add directory /usr/include/kadm5. Merge from vendor branch HEIMDAL: Add heimdal-0.6.3 Update the kerberos5 build framework to work with heimdal 0.6.3. Kerberos no longer comes with it's own telnet, so always build the one in Kerberos no longer comes with it's own telnet. Always build the one in Whitespace cleanup. Move libutil up above KERBEROS5 libs and remove the duplicate entry when No longer used. Fix kerberos5 build by adding some headers files. These should be generated on style(9) Allow numbers to be used after a letter in the modifier section after a Add per cpu buffer for storing KTR information. I have added a per cpu buffer for ktr, so this note can be scraped. Merge from vendor branch NCURSES: Add ncurses 5.4 source code. Be consistent in the use of BOOLEAN. Remove Ada95 files and man files. Merge from vendor branch NCURSES: Remove Ada95 files and man files. Update ncurses to version 5.4. There is no need to set *entry on each entry traversed in the red-black tree Fix nested comments. Vim added another */ while I copy/pasted some of the lines A few more instances of leaf that should be leap. getopt is WARNS 6 clean. Remove *spl() from dev/disk/{advansys,aha,ahb,aic7xxx,amd} replacing them with Remove *spl() from netinet6 replacing them with critical sections. * Include string.h to get strlen() prototype. More spl_* removal from dev/disk/, replacing them with critical sections. Replace the hand rolled linked list with a SLIST. We are in callout_stop, not callout_reset. Tell that to the world. Commit untouched SCTP files from KAME originally written by Randall Stewart. Add DragonFly to the #ifdef mess. Our mbuf allocation flags are prefixed with MB_ not M_. We have to declare the parent node for a sysctl. Add va_arg handling for DragonFly. Remove the second argument to ip_stripoptions(), it was never used. Properly handle mbuf copying on DragonFly when we try to reduce the size of a Initialize a few more timers. Add a forth argument to soreserve(). Pass just a NULL for now. Add libsctp. Convert spl* to critical sections. Remove forgotten debug printf. Call suser() with the correct number of arguments. Don't return right after a goto. Tie SCTP into the kernel, this includes adding a new syscall (sctp_peeloff). Hook libsctp into the build. Update manpages to reflect the changes joerg did that added const to some WARNS?= 6 and style(9) cleanup. Remove libraries libtermcap and libcompat from the linking. Make wakeup, tsleep and friends MP-safe. Back out last change since Matt has issues will it. Will reimplement it using Use errx(), no errno is set in this case. SUSv3 states that the type builtin should return a value > 0 if the completion Make SCTP compile when IPSEC is enabled. Add SCTP to LINT. Remove syscall-args. It is not needed now that libcr has been removed. Add locale(1). Gregory Neil Shapiro (28): Test CVS commit. Merge from vendor branch SENDMAIL: Import sendmail 8.13.4 into a new contrib directory as the first step Bring DragonFly's sendmail infrastructure up to date. This commit includes: Remove installation of the dragonfly{,.submit}.cf files since users may Make links for hoststat(8) and purgestat(8) man pages. Merge from vendor branch SENDMAIL: Import sendmail 8.13.6 Slight cleanup on the DragonFly README Adjust build infrastructure for sendmail 8.13.6 Merge from vendor branch SENDMAIL: Import sendmail 8.13.7 Add README.DRAGONFLY for sendmail 8.13.7 Hook sendmail 8.13.7 into the build Make the patch apply cleanly with sendmail 8.13.7's source Merge from vendor branch SENDMAIL: Import sendmail 8.13.8 Add DragonFly instructions file to new version directory Upgrade to sendmail 8.13.8 Merge from vendor branch SENDMAIL: Import sendmail 8.14.1 Add DragonFly instructions file to new version directory. Change build infrastructure over to sendmail 8.14.1. Merge from vendor branch SENDMAIL: Bring in sendmail.org code from the future 8.14.2 release which restores Import sendmail 8.14.2 Merge from vendor branch SENDMAIL: sendmail 8.14.2 has been imported Hasso Tepper (287): Testcommit. Although defined in sys/time.h we don't have CLOCK_VIRTUAL and CLOCK_PROF Bring kernel threads back into top. Revert intial IPv6 routing header type 0 processing fix. More agressive fix for IPv6 routing header type 0 issue. My round of spelling corrections in /share/man. ICMP extensions for MPLS support for traceroute(8). Although our linker supports pie, our elf loader doesn't. Update ping(8) code and manpage to the newest ones from FreeBSD. Add implementations of the inet6_opt* and inet6_rth* functions (RFC3542). Add ip6 and icmp6 displays to systat(1). Fix rewrite error which appeared in rev 1.3. Bring in manpages from RELENG_6. Manpages related to sound imported in Bring in the latest sound changes from RELENG_6. Add mpls-in-ip. Bring in some fixes from IANA and FreeBSD in progress. Nuke "is is" stammering. Clean up sys/bus/usb/usb_port.h. Remove not used/dead/old code. - Fix headphone jack sensing support for Olivetti Olibook 610-430 XPSE. malloc -> kmalloc One callout_stop() is enough. Nuke USBDEV(). Nuke the code specific to NetBSD/OpenBSD/FreeBSD at first. I doubt anyone Use kernel functions. I don't understand how I could miss these ... Nuke device_ptr_t, USBBASEDEVICE, USBDEVNAME(), USBDEVUNIT(), USBGETSOFTC(), Remove duplicate. Nuke SIMPLEQ_* and logprintf. Nuke usb_ callout macros. Fix KASSERT messages. Nuke PROC_(UN)LOCK, usb_callout_t, usb_kthread_create* and uio_procp. Nuke USB_MATCH*, USB_ATTACH* and USB_DETACH* macros. Nuke USB_GET_SC and USB_GET_SC_OPEN macros. There is no need to have ETHER_ALIGN define here. Nuke USB_DECLARE_DRIVER and USB_DECLARE_DRIVER_INIT macros. Nuke the code specific to other BSDs. Nuke USB_DO_ATTACH and remove device_t dv, since it is no longer needed. Fix typo. Remove last usb_port.h defines usages from the tree - selwakeuppri(), Fix stupid mistake. Sorry. Reduce diff with FreeBSD where it makes sense - add a lot of vendors and Regenerate usbdevs.h and usbdevs_data.h and fix affected drivers to use new Better chips distinguishing code for uplcom(4). Add more and fix some IDs, all related to uplcom(4). Add support for many new devices into uplcom(4). IDs are obtained from Minimal (relatively) patch to make my Nokia 9300 smartphone which uses Some trivial fixes obtained from NetBSD: Add references to the uftdi(4), umct(4), and umodem(4). ICMP Extensions for MPLS is porposed standard now - RFC4950. Fix driver_t. Fix warning. Reomve unnecessary sys/vnode.h include. ttyclose() increments t_gen. Remove redundant increments from drivers. There is no need to explicitly call ttwakeup() and ttwwakeup() after Magic Control Technology (MCT) USB to serial converters are not handled by Add devices based on Silicon Laboratories USB-UART bridge. Add uslcom(4) driver which provides support for USB devices based on Add uslcom(4) into LINT. uslcom(4) works with devices based on CP2103 chip. Tested by me with CP2103 Add some new uslcom(4) devices found in Linux driver. Add some new uslcom(4) device ids found in Linux driver. - Correct SYNOPSIS section in USB serial manpages. Add uark(4) driver which supports Arkmicro Technologies ARK3116 chip found Hardware flow control support for uslcom(4). Add the ID of USB serial interface used in HandyTech Braille displays. Add support for HandyTech's Braille displays into ubsa(4) (ID found in Remove reference to the nonexistant uhub(4). Add some devices based on Qualcomm HSDPA chips. Add umsm(4) driver for EVDO and UMTS modems with Qualcomm MSM chipsets. Make functions static. Fix setting 115200 baudrate. Add Smart Technologies USB to serial adapter. Add Smart Technologies USB to serial adapter. Use device_printf() where it makes sense. There is no reason to be so verbose. Bring in latest uftdi(4) driver from FreeBSD. Add usbdi(9) manpage. Nuke ARCnet support. Nuke token ring support. This also means one blob less in DragonFly. Nuke FDDI support. Nuke ARCnet, Token Ring and FDDI related headers and manpages during upgrade. Remove remainings of the oltr(4). Nuke fla(4). It's known to be buggy, supports very limited set of obsolete Remove fla(4) manpage during upgrade. Nuke nv(4), we have nfe(4) which replaces it. Missed this file in previous commit. Simplify the way how chip type is determined. Instead of managing insane Handle baudrate requests algorithmically with newer chips (not old SIO), Better setpgid(2) documentation. Add bus_alloc_resources() and bus_release_resources() functions to allow to Update the agp(4) code to the latest one from FreeBSD HEAD. This brings in Update the agp(4) manpage. General description and example based on NetBSD Oops, remove comma. uftdi(4) related usbdevs work: Add many devices to the uftdi(4). Sources of the info are mainly Linux and Add info about Vitesse Semiconductor Corporation VSC8601 PHY and Realtek Add support for Vitesse VSC8601 and Realtek 8211B PHYs. Patches are obtained Remove duplicates. Unconstify members of the lconv structure to make it conform to the C89 and Fix typo. Remove terminating semicolons from SYSCTL_ADD_* macros. This will allow to Add support for newer ICH SMBus controllers. Also corrected ICH4 entry in Fix warning. Hardware sensors framework originally developed in OpenBSD and ported to Coretemp(4) driver for Intel Core on-die digital thermal sensor with patch lm(4) and it(4) drivers for hardware sensors used in many motherboards. Ported Fix synopsis (reminded by Constantine A. Murenin) and history. Dragonfly always passes a flag for every IO operation depending whether Simplify the code a lot - don't try to be too clever and handle chips with According to RFC2711 routers shouldn't treat all packets with a Router If answer to the repeated probe (same TTL as in the previous probe) packet Update named.root to the version from 1 November 2007 from. Sync with FreeBSD - add OpenBSD 4.2. Kill usage of USB_VENDOR_FOO and USB_PRODUCT_BAR defines mostly using two Nuke usbdevs and references to it. Kill devinfo handling in drivers, set device description in one place - - Add support for 230400 baud rate. Add uticom(4) driver for Texas Instruments TUSB3410 USB to serial chips Add moscom(4) - the driver for MosChip Semiconductor MCS7703 USB to Add uchcom(4) - the driver for WinChipHead CH341/CH340 chips. Add missing USB to serial drivers. Update the uftdi(4) manpage. Fix LINT build. Remove 386 CPU support from the runtime linker. -x was removed long time ago. Fix typo. Add SATA ATAPI support for AHCI controllers. Fix the fix. Fix typos. Allow for any baud rate within a range rather than having a fixed list of Add support for Intel 7221's and 845M GMCH controllers. Fix no-sound issues with ASUS A9T notebook. - Merge input/microphone support for ASUS A8N-VMCSM series. - Add codec id for Realtek ALC268. - malloc M_NOWAIT -> M_WAITOK. - Gigabyte G33-S2H fixup, due to the present of multiple competing Enable headphone jack-sense for HP nx6100 with AD1981B AC'97 codec, Remap and virtualize mixer controls for HP nx6110 with AD1981B AC97 codec, Add support for trimmed down version of ATI SB600 AC97 audio Limit total playback channels to just 1, for ALi M5451. The reliability * Fix support for followings: - Add missing MCP65 id which was accidentally removed in previous commit. Some trivial changes from FreeBSD that allow to use kgdb on /dev/fwmem0.0. Hifn 7955/7956 support to the hifn(4) driver. - Add a '-k' option which does not remove input file, like bzip2(1) do. Add ID for ICH8M in compatibility mode. This makes Thinkpad X61s report Remove references to drivers which don't exist in DragonFly. Don't show in netstat(1) output without the -a switch TCP socket in LISTEN Kernel part of bluetooth stack ported by Dmitry Komissaroff. Very much work Pass all ATAPI commands through. Fixes detecting capabilities of DVD When attached to a high-speed device, report a more appropriate Make NO_GETMAXLUN quirk really do something useful. Add missing ';'. Pay attention to the timeout value passed down by the upper layer. This Add bluetooth userspace libraries - bluetooth(3) and sdp(3). Add /etc/bluetooth/ with common files. Add bluetooth(3) and sdp(3) libraries. Adjust indenting in progress. Add btconfig(8) - the utility used to configure Bluetooth devices. Add sdpd(8) (Bluetooth Service Discovery Protocol daemon) and sdpquery(1) Fix id of the 945GME chip. Add few more usb devices. 0-5 are used in any modern machine and user might Don't supress attach messages from devices other than first one while - Fix compiling umsm(4) with UMSM_DEBUG Add support for EVDO/UMTS card found in X61s Thinkpads. Make sure we really do only the software part if we're dying. Fixes panic Huawei UMTS/HSDPA adapetrs are already handled by umsm(4). Increase size of the umsm(4) buffers and tty(4) input buffer to allow high Defaults for btconfig and sdpd rc.d scripts. Speed up uhub attachment considerably. Rather than powering up each port Import libevent-1.3e. Merge from vendor branch LIBEVENT: Add READMEs. Build libevent. Add bthcid(8) - Bluetooth Link Key/PIN Code Manager and btpin(1) Bluetooth Add rfcomm_sppd(1) - RFCOMM Serial Port Profile daemon. Document bthcid(8) related variables in rc.conf(5). Nuke wicontrol(8). More wicontrol(8) removal. Nuke the ntpd(8). Short manpage for ubt(4). Most of it is commented out for now because sco Implement net.bluetooth sysctls. Add bluetooth(4) manpage. Implement SCO related sysctls. SCO sysctls are implemented now in ubt(4). Mobe btpin(1), rfcomm_sppd(1) and sdpquery(1) where they really should be - Remove btpin(1), rfcomm_sppd(1) and sdpquery(1) from /usr/sbin. umsm(4) -> ugensa(4) as it makes much more sense - there is nothing Qualcomm Update to the version 2008020400 which adds IPv6 addresses for six root - Install bthcid.conf. Add some Sierra Wireless devices found in Linux sierra driver to ugensa(4). Add more device id's to the ugensa(4) taken mostly from option USB serial Add _SC_NPROCESSORS_CONF and _SC_NPROCESSORS_ONLN variables to the Fix pf and ipfilter module loading checks. Fix typo. Remove #ifndef __cplusplus around wchar related stuff in include/wchar.h Fix buffer overflow in ppp command prompt parsing (OpenBSD errata 2008-009). Make sure lo0 is brought up before any other interfaces to avoid problems 10Base-TX -> 10Base-T and 1000Base-TX -> 1000Base-T. Although 1000Base-TX Regenerate miidevs.h. Protect macros with "do { } while(0)" where needed. Sync Bluetooth stack with NetBSD. Decrease the number of reported stray interrupts from 100 to 10. Problems DRM update to git snapshot from 2008-01-04. Add double_t and float_t typedefs for both i386 and amd64 as required by C99. Fix ifdefs to make it possible to use time.h in standards compilant code. Sync Citrus iconv support with NetBSD. Add libc support for gcc stack protector. Compatibility with gcc34 propolice Add OMNIKEY CardMan 4040 smartcard reader. Regenerate. Add a driver for Omnikey CardMan 4040 smartcard reader - cmx(4). Add support for cmx(4) devices. Add useconds_t and suseconds_t used for time in microseconds. Assorted fixes to ugen(4) from FreeBSD. Remove udbp(4) form tree. It was never connected to the build and supports Remove fortran from base. Make use of interrupt endpoint to increase responsiveness. Update the traceroute(8) to the newest code from FreeBSD HEAD. Besides many Merge error fix. Add support for for the AI_NUMERICSERV getaddrinfo(3) flag. While pulling Add pthread_atfork() implementation to libc_r. libthread_xu has it already, Unbreak buildworld. According to SUSv3 including just regex.h must be enough. Fixes build of Remove superfluous recursive lock. This little change makes possible (safe) Remove the code which disables port status change interrupts for 1 second Make sure host controller interrupts are not enabled until all Fix probable cut-n-paste error. Link libarchive against libbz2 and libz to be compatible with upstream. Move timeval struct into its own header and include it from headers where Change suseconds_t to long as it is in most of systems. Fixes a lot of Add objc to the gcc-4.1.2. Merge from vendor branch GCC: Build objc support. We are using 4.1.2 really. Fix [gs]etsockopt(IP_HDRINCL) which allows mere mortals like me to obtain Detach correctly so there is no need to panic during reattach. Cleanup err/error mess in the uticom_download_fw(). Use IPv6 documentation prefix (RFC3849) instead of 6bone prefixes in Fix interrupt pipe processing to treat a ugensa(4) interrupt message Remove some useless variables and assignments from USB code. Fix some NULL pointer dereferences, most of the in debug code though. Make Huawei E220 change the mode from device with single umass interface to Some agp(4) fixes: Add some methods to ACPI to handle embedded controllers and device matching. Move acpi_toshiba.c, it's not pc32 specific. Add ACPI support module for IBM/Lenovo Thinkpad laptops. Work in progress, Add STAILQ_FOREACH_MUTABLE. Add acpi_video(4) - a driver for ACPI video extensions. Document STAILQ_FOREACH_MUTABLE macro. Add acpi_video(4) manpage and move acpi_toshiba(4) manpage out from man4.i386. acpi_thinkad(4) manpage. Unbreak build. Remove dhcp-3.0 from base and import dhclient from OpenBSD. Porting work Upgrade pieces for new dhclient and dhcpd/dhcrelay removal. Remove dhcpd and dhcrelay remainings. Make BOOTP server in installer work with dhcp server from pkgsrc. Handle (unit == -1) case in the device_find_child() function as already used Fix coretemp(4) to provide temperatures from all cores (instead of reading Make pkgsrc/wip cvs checkout use -P. The result of the "RFC3542 support" SoC project by Dashu Huang. Bring in newer ping6(8) from KAME via FreeBSD lying in my disk quite some Forgot this file in the "RFC3542 support" SoC project commit. acpi_cpu(4) update. It's now possible to use higher (lower power usage) C Quite minimal patchset to help to save some more power - put unused PCI Sync pci_[gs]et_powerstate_method with FreeBSD which makes things a little Make acpi support modules depend on acpi module. Turn power off for detached (module unloaded) PCI devices. No power down is Put IPV6_RTHDR_TYPE_0 define back until the world fixes itself. Add _SC_PAGE_SIZE as alias to _SC_PAGESIZE to make modern software pieces Attempt to fix the crash if accessing usb device(s) after unloading usb.ko. Update acpi_battery(4) related code to the latest one from FreeBSD HEAD. The devinfo(3) library provides userspace access to the internal device Welcome devctl(4) and devd(8). devctl(4)/devd(8) support in acpi_thinkpad(4). If a neighbor solictation or neighbor advertisement isn't from the Correctly handle Intel g33 chips and add support for g45 chips. Don't allocate space for empty banners. Makes me able to connect various How buggy this little piece of code could be? Repair strnvis() buffersize Add code to parse the utrace(2) entries generated by malloc(3) in a more Update sensorsd(8) to the latest code. Fix CVE-2008-3831. Affects the Intel G33 series and newer only. Bring in some fixes from FreeBSD. Amongst other fixes, like panics in debug We don't have /dev/audio support any more, but make it symlink to /dev/dsp Hopefully more bulletproof workaround to fix problems with SATA ATAPI Make apps using '#define _POSIX_C_SOURCE' compile. Install acpiio.h. Add hardware type value define for IP over firewire. Not used yet. Unbreak installworld. Sync libusbhid with other BSDs (breaks API compatibility). Sync usbhidctl Remove /usr/include/libusbhid.h. Hidetoshi Shimokawa (11): Add dcons(4), a pseudo console driver for FireWire and KVM interface. Use opt_dcons.h. Add support for eui64(5) to libc. dconschat - user interface to dcons(4) Add dcons(4) related manpages. Hooks to build dcons(4)/dcons_crom(4). Add dcons(4). Update FireWire device nodes. Add eui64.5. Preserve dcons(4) buffer passed by loader(8). Sync with FreeBSD-current: Hiroki Sato (6): Test commit. Replace IPv6 related manual pages that may have violated - Nuke #ifdef SCOPEDROUTING. It was never enabled and is useless now[1]. Move to ND_IFINFO(). Query A records before AAAA records in getaddrinfo() when AF_UNSPEC Fix a bug which can allow a remote attacker to cause denial Hiten Pandya (452): Add a handy macro, called FOREACH_PROC_IN_SYSTEM(p) for better Consolidate usage of MIN/MAX(). mdoc(7) assorted fixes: mdoc(7) assorted fixes: Oops, fix build before anyone realises. :-) Merge from FreeBSD: Fix building of vm_zone.c in the case of INVARIANTS. DDB updates: Bring the malloc/mbuf flags up-to-date with FreeBSD. Bring us in sync with 4.8-STABLE. Assorted mdoc(7) fixes: Kernel Police: Kernel Police: Comment out the unused proc/thread declarations. No need to comment out unused thread/proc declarations (prev. revision), Remove INVARIANT_SUPPORT conditional pre-processing on zerror(). Just style(9) police: Consolidate MIN() usage across kern/ tree. - Make `vmstat -i' working after Matt's interrupt related LINT breakage fix, part one: Generalise, and remove SI_SUB_VINUM; use SI_SUB_RAID instead. Get LINT to build. Use FOREACH_PROC_IN_SYSTEM() throughout. LINT cleanup: add <sys/systm.h> for printf() LINT cleanup: fix MAX difinition. Define HAVE_PPSRATECHECK macro, now that we have merged the LINT cleanup: remove redundant ``struct proc *p'' declarations. Add backtrace() prototype. DELAY() does not belong in a MD include file, instead, move Move the backtrace() function from kern_subr.c to kern_debug.c. Um, ok. I should have slept a little more. Backout my ``proc cleanup'' brain damage. Remove the documentation/ folder from under 'src' tree. Add the TINDERBOX kernel configuration file. Remove `YOUR'. Add two useful macros that I have been meaning to add for quite Fix style issue. Use addalias() to track the vnode if it not of a regular type. Optimize split(1) by using dynamic allocation for buffers. Add a `-q' option to killall(1). This is helpful when you don't Fix the case when `-p' option of truss can be passed the pid of Fix vmstat(1) diagnostic output. Fix sorta critical bugs in fsdb(8); the @mtime and @atime reporting Add device indentification for Intel 82801DC (ICH4) SMBus Controller. Add device identification for Intel 82801EB (ICH5) SMBus Controller. 1) Add new tunable, kern.syncdelay: Respect ps_showallprocs when using the Proc file system. Use vm_page_hold() instead of vm_page_wire(). Return a more sane error code, EPIPE. The EBADF error code is Introduce a new poll operation bit, `POLLINGIGNEOF'. It is used for Fix style nit. Pass only one argument to vm_page_hold() as a sane person would do. Check when M_PREPEND returns an empty mbuf. Change my e-mail. Fix logic, flow and comments for previous (rev 1.9) change. OK, fix build, while I am there, I will get some screw for my head Add MLINK mount_ufs(8) to mount(8). Mdoc cleanup of section 4: Add the linux(4) manual page. Fix IPFW2 build. K&R style function removal. Update functions to ANSI style. Sort Copyright order. Fix L2 internal cache reporting when it is an AMD Duron rev. A0 chip. K&R style function removal. Update functions to ANSI style. Rename: K&R style function removal. Update functions to ANSI style. Fix a spelling mistake. Security Fix: Correct unsafe use of realloc(). Fix compile when GUPROF is defined. Fix kldload(2) error return when a module is rejected becaue it is AMI MegaRAID Crash-Dumps support. Fix an ordering issue, call vm_map_entry_reserve() prior to locking Nuke the zalloci() and zfree() stuff sky-high. We no longer have Remove zalloci/zfreei from the Makefile too. Major contigmalloc() API cleanup: Second contigmalloc() cleanup: Fix build by not attempting to compile libc's malloc directly. Add SysV IPC regression suite. Use vnconfig(8) instead of mdconfig(8) to enable ${swapfile}, Update sysinstall's NFS module: Add the pim(4) and multicast(4) manual pages, following Fix a ``missing free''. Un-initialise (i.e free) the HPFS inode hash; previously, unloading Fix two bugs in split in split revealed after my optimization in Remove unneeded XXX, to rename lwkt_create() as it was named Add a prototype for if_indextoname(). Fix long interface name handling. Support for conflict (pkg_add -C) checking and pkg_tools (pkg_add -P). Add pkgwrap.c, which was missed in my last commit. Improve device identification strings. Fix typo in a comment. Fix build of the PPS driver. Remove the pca driver from the build for now, due to the revamp Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Update to devlist2h.awk and friends: Include Makefile.miidevs, so we can just do: Merge from FreeBSD: Merge from FreeBSD: NEWCARD: change `device card' to `device pccard'. Update maintainer contact information. Score a duh-point for myself. Change the remaining lines for the Merge from FreeBSD: Merge: FreeBSD (RELENG_4) uipc_socket.c rev. 1.68.2.24 Merge: FreeBSD (RELENG_4) isp_ioctl.h 1.1.2.5 Merge: FreeBSD (RELENG_4) i386/isa/psm.c rev. 1.23.2.7 Merge: FreeBSD (RELENG_4) netstat/inet.c rev. 1.37.2.11 Update the Broadcom Gigabit Ethernet driver and the Broadcom Remove my middle initial. Correct a filename typo, there is no such thing as machine/pcpu.h, Bring the cue(4) and miibus(4) manual page in line with Include thread.h if _KERNEL_STRUCTURES is defined. Do not print a warning about PIM sysctl node (net.inet.pim.stats) I just scored a few duh-points for myself. I committed an older version Bring the BFE(4) manual page up-to-date with FreeBSD RELENG_4. Merge from FreeBSD: Merge from FreeBSD: Merge from FreeBSD: Document that kldload(2) can also return EEXIST. Fix violating usage of M_DONTWAIT in calls to malloc() by replacing Adjust IPFW to use M_WAITOK instead of M_NOWAIT. The M_NOWAIT flag on Fix spelling mistake, s/itnerrupts/interrupts/. Convert the code to ANSI style, and remove 'register' keywords. Linux emulation system call update. Mega mdoc(7) update: Replace a manual check for a VMIO candidate with vn_canvmio() under Use info->td instead of curthread in ffs_reload_scan1(); although Collapse the if(...) block in pim_stats added by my previous commit Correct typo in comment. Turn TDF_SYSTHREAD into TDF_RESERVED0100 since the flag is never used Integrate remaining part of the network interface aliasing Integrate the remaining parts of the network interface aliasing Fix loading of the SMBFS kernel module. The KMODDEPS line in the SMBFS Give the VFS initialisation functions an update: Add manual page for the busdma(9) API. It has detailed information on Merge: FreeBSD (HEAD) sys/kern/sysv_sem.c rev. 1.69 Merge: FreeBSD (RELENG_4) ip_fw2.c rev. 1.6.2.19 Correct a bug in vm_page_cache(). We should make sure that a held Per-CPU VFS Namecache Effectiveness Statistics: The globaldata houses a pointer and not an embedded struct for nchstats; Add Makefile for the netif/ie ISA NIC driver. Adapt the netisr message handlers to accomodate the available error Garbage-collect unused variable. Add `device atapicam' to unbreak TINDERBOX config. Merge: FreeBSD (RELENG_4) aac_pci.c rev. 1.3.2.19 Merge: FreeBSD (RELENG_4) kern_descrip.c rev. 1.81.2.19 Merge: FreeBSD (RELENG_4) msdosfs_vfsops.c rev. 1.60.2.9 Merge: FreeBSD (RELENG_4) kern_event.c rev. 1.2.2.10 Merge: FreeBSD (RELENG_4) vfs_syscalls.c rev. 1.151.2.19 Add a KKASSERT to mount(2) to make sure we have a proc pointer. Rename the sysctl handler for nchstats to reflect reality; I named it The "Hashed Timers and Hierarchical Wheels: Data Structures for the Fix compilation of profiling. 1) Move the tcp_stats structure back to netinet/tcp_var.h. Make IP statistics counters per-CPU so they can be updated safely. Clean warnings under DIAGNOSTIC. Use the correct cast, ns_ifra_addr -> ns_ifaddr. Move around some #ifdefs to silence warnings. Add a forward declaration of 'struct uidinfo'. Bring the I4B layer up-to-speed with 64-bit physical addressing. Handle UIO_USERISPACE (just fallthrough to UIO_NOCOPY), to silence Just pass NULL to sync(), no need to create a `dummyarg'. Catch up with if_ioctl prototype changes (see rev. 1.10 of net/if_var.h). Bring le(4) up-to-speed with 64-bit physical addressing. netif/cx/cx.c: Correct pre-processor conditional surrounding mmxopt's declaration by Add bus_alloc_resource_any(9). KKASSERT that we require inp->inp_pcbinfo, in in_pcbinswildcardhash(). Add a readonly sysctl for the `kern.mmxopt' loader tunable (same name). Remove redundant newline in a call to panic(9). Remove newline from panic(9) message, it is redundant. Remove newline from panic(9) message, it is redundant. Add MLINK busdma(9) which points to bus_dma(9). Update the tsleep(9) manual page about our reality. Update the DELAY(9) manual page about the header file where Update the KASSERT(9) manual page to reality. Update the suser(9) manual page about reality. Correct mdoc(7). Remove unneeded empty line to silence mdoc(7) warnings. Mdoc(7) police: Mdoc(7) police: Remove erroneous use of the `Fl' mdoc macro and replace it with Correct the use of the .Dx/.Fx macro. Add entry for the CAPS IPC library. It is now possible to refer to the Correct the usage of the .Dx macro to avoid mdoc errors. Remove extraneous `.El' macro. Correct the usage of the .Dx macro. Correct usage of the `.Em' macro. Do not specify a macro as first argument to the literal macros In sodealloc(), use do_setopt_accept_filter() to free an accept filter Add a read-only sysctl for observing the maximum number of Remove an extra comma at the end of .Nm list. Add a manual page which documents the generic hash routines, i.e. Add an MLINK for KKASSERT(9). Document the ``resource management'' (rman) abstraction in rman(9). Quickly fix an MLINK while no one is looking... Update the mlock(2) manual page: It is DragonFly BSD, not FreeBSD. Correct config(8) files. Document the pmap_kenter_quick(9) function. While I am here, fix Fix SYSCTL description style. Wrap the VM MAP locking routines with _KERNEL, user-land has no Consolidate SYSCTL_DECL(_kern_ipc), move it to sys/sysctl.h as First pass at updating top(1): Adjust include path so that the machine.h in ${.OBJDIR} is used Force commit to clarify that the previous revision should have been Remove a stale comment: PG_DIRTY and PG_FILLED were removed in Remove an unimplemented advisory function, pmap_pageable(); there is Do not use the display function if the -o (opaque) or -x (hexdump) Remove '-*- nroff -*-'. Correct spelling. Merge from FreeBSD, RELENG_4 branch, revision 1.250.2.26. Plug a memory leak when the kernel initialiazes config_devtab resources Remove some long gone #defines, PFCLUSTER_BEHIND and PFCLUSTER_AHEAD; Quotactl(2) should set the uid correctly, based on the QUOTA type supplied. Cleanup the textvp_fullpath() function; summary of changes: VM Resident Executables update: Cleanup the manual page: Remove the compat macro textvp_fullpath(), and use vn_fullpath() Surround a multi-line conditional block with braces for readability. Lock manipulation of the 'exec_res_list', i.e., the list of resident Add a manual page which describes the vn_fullpath(9) function. Flush cached access mode after modifying a files attributes for NFSv3. Deprecate use of m_act, which is an alias of m_nextpkt; just use Allow top(1) to toggle display of processes+threads or *only* threads Re-arrange the 'Executable' column for the '-l' option so that long Add a manual page describing the LSI Fusion family of devices with a Discard the first 1024 bytes of output as suggested by Now that we have clients that use m_getcl(9), set the default mcl_pool_max Do not use the installed include files, instead, set the include path Implement POSIX.1-2001 (XSI)'s ulimit(3) library call. Add a reference to the ulimit(3) manual page. Register keyword removal. No operational changes. ANSI-fication. No operational changes. Dissolve use of the 'register' keyword. Use correct sentence structure. Remove useless __STDC__ junk. Avoid a memory leak if vfprintf(3) by always calling va_end(3); this Use ANSI C prototypes and remove the !__STDC__ varargs compatibility FUNLOCKFILE(fp) *after* modifying the FILE pointer's fields. Set the return value properly for fgetpos(3). Document security issues with gets(3) in a proper manner. mdoc(7) corrections; use .Dv instead of .Em etc, fix grammar. C99 update: freopen(3) with NULL 'path' argument so that it opens the Re-order include, 'sys' includes first, 'vm' includes after. Conditionally include the essential header files, sys/queue.h and Remove an accessory function called msf_buf_xio(); it is unnecessary for Replace the use of specially reserved pbufs in NFS's nfs_getpages() and IEEE Std. 1003.1-2001 dictates that fileno(3) behave as it locked the IEEE Std. 1003.1-2001 wants feof(3) to behave as if it locked the FILE If handed a bad file pointer that we can't write to, set the errno value Revert previous commit about FUNLOCKFILE(fp), it causes ill (and weird) Add a 'big fat comment' about the FUNLOCKFILE(fp) implementation and why Add some whitespace for clarity. No operational changes. Revert the locking of feof(3) for now; there is possibility of ill Do not produce a warning if the sysctl does not exist. This happens when If there was a cache hit, and the msf->m_flags was set to SFBA_ONFREEQ, Make the procfs_validfile() function static. Fix spelling in comment. Spell 'written' properly. Add a helper utility, called asf(8): Add Symbol File. Fix prototype of usage(). Add the csplit(1) utility, which splits files based on context, as IEEE Std. 1003.1-2001 (SUSv3): Hook up the recently added utilities [1] to the build. Add the POSIXv2 asa(1) utility; it interprets FORTRAN carriage-control Hook up the asa(1) utility to the build system. Zero the interval timers on fork(2) rather than copying them to the Zero-out the whole pstats structure and then copy the relevant fields, Move the 'p_start' field from struct pstats (Process Statistics) into the Use the kern.boottime sysctl for retrieving the system boot time as a Check kp (struct kinfo_proc *kp) against NULL and not 0, because it is a Remove references to FreeBSD in the comments. Fix indentation. Merge rev. 1.7 of FreeBSD's src/sys/dev/twa/twa_freebsd.c. Turn on the DDB_TRACE config option, so that we will get a trace as soon Merge from FreeBSD-4, revision 1.115.2.20: Merge from FreeBSD-4, revision 1.1.2.9: Add a debug directory under src/test, where we will house all of our debug Document the 'running_threads' GDB macro. Update our xlint(1) to work with recent preprocessor changes. Randomize ephermal source ports. Generalize a comment, remove 'FreeBSD' from the comment because we are Remove the advertising clause from this file. Use the official devised license for the first time, starting with Fix a small but important mistake. Bunch of mdoc(7) and various file path fixes. Add a bunch of .Fx (FreeBSD) version numbers. Correct mdoc(7) errors: Use the .Dv directive for marking up PROT_* constants. A variety of mdoc(7) and grammar changes: Correctly use the .Bd directive, i.e., present it with the -literal Just use the .Ev directive for printing an environment variable, Use the official 3-clause license for the MSFBUF header file. Update list of FreeBSD version numbers, for use with manual pages. BUF/BIO work, for removing the requirement of KVA mappings for I/O Correct reference to buf->b_xio.xio_pages in a comment. Append necessary information to the package name for portupgrade(8) to Remove an erroneous '+' symbol at start of 'rand_irqs'. Update a stale comment about lwkt_replymsg(). POSIX update and cleanups for getopt(3): Correct mdoc(7) processing errors; the .Bl directive should be provided Correct mdoc(7) for basename(1) and passwd(1) manual pages. Merge changes from FreeBSD: Stop depending upon an implicit 'int' as the return type of main(). Annotate the b_xio field member of the BUF structure. Change all files that I own to use the official DragonFly Project Merge revision 1.25 of src/usr.bin/time/time.c from FreeBSD. Merge revision 1.26 of src/usr.bin/time/time.c from FreeBSD, and an Eliminate hard sentence breaks. Eliminate hard sentence breaks. Double semi-colon police! BUF/BIO stage 2: Remove an erronous 'static' in front of pci_alloc_resource(9). Add a prototype for isab_attach(), which is used by the ACPI-5 ISA Bring definition of va_list and friends, so that ACPI module actually Eliminate hard sentence breaks. Merge revision 1.16 of src/usr.bin/rusers/rusers.c from FreeBSD. Minor KNF/style cleanups. No operational changes. Display proper information when the verbose flag (-v) is passed to Aesthatical change, move 'All rights reserved' to the same line as Include <sys/types.h> and <netinet/in.h>, so that parse.y gets the correct Fix compiler warnings; include <sys/types.h> and <netinet/in.h> to satisfy Mechanically kill hard sentence breaks. Respect locale settings from the environment. Add new options to select different registries, basically synch'ing us Assorted spelling fixes from Christian Brueffer <brueffer@freebsd.org>. It is 'estcpu', not 'setcpu'. Add the C99 utility, now that we have a decent C99 compiler in base, i.e. The return-path is optional in a headline, therefore don't skip a message Merge mdoc(7) corrections from FreeBSD -CURRENT. Merge revision 1.8 and 1.9 from FreeBSD -CURRENT, i.e., add a section on Aesthetic changes: Add MOD_SHUTDOWN to be processed by the module event handling function. Fix generation of opt_inet.h and opt_ipx.h by providing their targets Remove UIO_USERISPACE, we do not support any split instruction/data Remove VAX conditionalized code. Add the ieee80211(9) API manual pages. KNF/style changes. Match scanfiles() function prototype with the rest Minor cleanups to bring us on-par with FreeBSD's cat(1): Use the correct header file, which is located in netproto/802_11. KNF/style and warnings clean up. ANSI style prototype for printb(), and Major cleanup of the base IPFilter: Readd the $DragonFly$ Id tag which I removed by mistake in previous Correct spelling. Test Commit Add a SECURITY section which describes the kern.ckptgroup sysctl among Reorder included headers to an acceptable standard. System headers Tell the reader about sys/buf.h as well. Document lockcountnb(9) which is the non-blocking counterpart of Slap 2005 into the COPYRIGHT. Happy New Year! Slap 2005 into the copyright. Happy New Year! Improve the IPFilter rc.d file, mainly bring it in line with changes Move the MALLOC_DECLARE into sys/msfbuf.h header file. Remove a comment that does not make sense; it was just a remnant of Fix whitespace for function prototypes. Header include protection from userland. Rename the flags for sf_buf_alloc(9) to be in line with FreeBSD: Add a typedef msf_t for struct msf_buf *. Cleanup build warnings, cast and type modifications. Remove 'rttrash', it has been long removed from the routing code; former Remove stale inclusion of struct disklabel and other bits from Remove contents of include files from their associated manual pages thus Fix GCC3 related pre-processor issues. Fix mistakes from previous commit, cleanup _NCH_ENT macro. Conditionalise include of "opt_ktr.h" under _KERNEL. Fix display of code in the EXAMPLE section; do not enter vertical space Bring in the ktrdump(8) utility from FreeBSD. Update manual page with the version in FreeBSD, r1.8. Update manual page with FreeBSD's r1.7. Fix $FreeBSD$ ident string. Add a new function to the nlookup API, called nlookup_set_cred(9); this Provide a better annotations for KTR_VOP and KTR_NFS trace classes. Add manual page for closefrom(2) system call and hook it to the build. Add entry for DragonFly 1.2. Add standard entries for libcaps, libkinfo and libkcore; they can now be Add 'mycpuid', a #define for accessing mycpu->gd_cpuid. Mechanical cleanup of TCP per-cpu statistics code, better naming etc. Remove the '_GD' macro hack: Forced commit to note previous change, of introducing 'mycpuid' was Mechanical cleanup of IP per-cpu statistics code, better naming etc. Ick! Seems I was under a placebo, correct a last minute typo. Use the correct macro for printing TCP statistical counters when we only Add bpf_mtap_family(9) which allows the client to specify an address Fix whitespace in copyright dates. Be consistent for preventing redundant header inclusion. Change CPU time statistics (cputime) to be accounted on a per-CPU basis. Adapt the KINFO library to aggregate per-cpu cputime statistics. KINFO library cleanups: Fix breakage. Adapt kcore_get_sched_cputime(3) to retrieve and present an aggregated Minor adjustment of types for consistency. Use reallocf(3) and cleanup some NULL checks. Add some useful comments to interface functions; required a little bit of Networking routing statistics on a per-CPU basis: Correct typo in comment for vshiftl(). Covert netproto/ipsec into using critical sections instead of SPL ops. The BLIST API is just as usable in userland as it is in the kernel; and Minor word-smithing. Use _KERNEL macro for wrapping kernel-only code. Add code comments to improve object type documentation. Add counters for recording Token/MPlock contention, this would help in Correct a typo, v_rbdirty_tree is for dirty buffers. Remove conditional bits about other operating systems, they are not Update the unifdef(1) utility to the latest code from FreeBSD, and the Remove outdated information with regard to old tinderbox. Clean the VFS operations vector and related code: Add some useful GDB macros that have been sitting in my local tree, Add minimal manual page explaining use of bread(9) and bwrite(9) kernel Remove items from TODO list that have been implemented in last major *** empty log message *** Forced commit to ensure revision 1.18 fixed a build break introduced Update the physio(9) manual page to reflect reality. Clean up the Update description of msfbufs sysctls. Update filename in comments. BUF/BIO cleanup 2/99: Add flag for compiling the tests without GCC -O2 optimisations, quite Add test for comparing following types of conditionals: Add a workaround to make 3COM cardbus cards to propagate the device BUF/BIO cleanup 3/99: Bring name of an unused flag field in line with the rest. Put unused flag space definitions back to their original position in Initialize buf->b_iodone to NULL during bufinit(9) stage. BUF/BIO cleanup 4/99: BUF/BIO cleanup 5/99: Update copyright notice. Use standard DF copyright. Add 'debug.sizeof.buf' sysctl for determining size of struct buf on a BUF/BIO cleanup 6/99: A better description for 'debug.sizeof' sysctl. BUF/BIO cleanup 7/99: Add minimal utility that is able to make sense of the per-cpu load Move the bswlist symbol into vm/vm_pager.c because PBUFs are the only Remove stale comment about vm_mem_init's arguments. Make a few PRINTF lines readable, break them up if necessary. Whitespace cleanup. Re-word some sysctl descriptions, make them compact. Wrap 'pqtype' variable with INVARIANTS so annoying GCC warnings are Move bio_lblkno (logical blockno in a file) field back to its rightful Remove PV_* flags from PMAP MD header files; these were made useless Update the TWE 3ware/AMCC driver code, bringing in all the fixes made Remove NO_B_MALLOC preprocessor macro, it was never turned on, and Add more documentation comments to disk_create() and dscheck(). Style: break line into two, so it fits nicely in 80-column mode. Add a comment on top of ad_start, mentioning that it is called with Document the dscheck(9) function and explain how it affects the slice- Jeffrey Hsu (278): Add support for RFC 3390, which allows for a variable-sized Use relative directory, rather than /sys, as base directory for "make tags". Implement the Eifel Dectection Algorithm for TCP (RFC 3522). New statistics to keep track of supurious retransmits. Fix spurious spelling within comments. Make the logic clear on when to use Eifel detection or fall back Add statistics to disambiguate how a spurious retransmit was detected. Decouple slow-starting an idle connection from Nagle's algorithm. Non-semantic-changing cosmetic transformation. Gets rid of unnecessary Properly handle an error return from udev2dev(). Fix typos in comments. Add support for Protocol Independent Multicast. Add support for Protocol Independent Multicast. Account for when Limited Transmit is not congestion window limited. Differentiate between send and receive window variables. Introduce the DDB_TRACE kernel config option to automatically print a stack Non-semantic changing cleanups: Centralize if queue handling. Add missing interface queue. Leftover netisr consolidation cleanups. Leftover netisr consolidation cleanups. Leftover netisr consolidation cleanups. Leftover netisr consolidation cleanups. Reset the retransmit counter when setting the timer on a failed Optimize out an unneeded bzero(). For performance reasons, kernel sends should not be subject to sockbuf Unroll obfuscated loop. Unravel a nested conditional. Remove illegal identifier after #endif. This patch improves the performance of sendfile(2). It adds a hash Pull the sf_buf routines and structures out into its own files in Merge from FreeBSD: Merge from FreeBSD rev 1.43 (original commit made by tjr@FreeBSD.org). Merge from FreeBSD: Remove dead code. Remove unused local variable. Cosmetic code cleanup. Relax a KASSERT condition to allow for a valid corner case where Cosmetic changes. Split out wildcarded sockets from the connection hash table. A UDP socket is still bound after it is disconnected, so we need to Introduce access methods for making protocol requests. Once we distribute socket protocol processing requests to different Propagate curproc removal changes to files compiled by LINT. Remember the next lowest power of 2 of "npus" in "ncpus2". Use power of 2 masking to make packet hash function fast. Verify code assumption on number of processors with a kernel assertion. Dispatch upper-half protocol request handling. Dispatch upper-half protocol request handling. Correct double increment of the inp generation count. Use 0 for integer value rather than NULL. Change the "struct inpcbhead *listhead" field in "struct inpcbinfo" Eliminate the use of curproc in route_output() by passing down the process id Remove unused second argument to ip_stripoptions(). Send UDP packets out without a temporary connect. Cosmetic changes. Implement Early Retransmit. Print out Early Retransmit statistics. Include <sys/types.h> for autoconf/automake detection. To comply with the spec, do not copy the TOS from the outer IP Partition the TCP connection table. Clarify strange ipfw byte ordering convention. Make tcp_drain() per-cpu. Make tcp_drain() per-cpu. Ifdef out unused variable. Cosmetic cleanup. Consolidate length checks in ip_demux(). Eliminate use of curthread in if_ioctl functions by passing down the Do all the length checks before returning even if "ip_mthread_enable" Eliminate use of curproc and curthread by propagating thread pointer down Need header file to deference proc structure. Directly call pru_control until copyin problem is resolved. Give UDP its own sosend() function. Pull out m_uiomove() functionality from sosend(). Change sendfile() to send the header out coaleseced with the data. Only enter wildcard sockets into the wildcard hash table. Only enter into wildcard hash table if bind succeeds. Only enter into wildcard hash table if bind succeeds. Remove the ip_mthread_enable sysctl option. Enough code has been converted Consolidate length checks in ip_demux(). Fix byte-order. Dispatch reassembled fragment. Consistently use "foreign" and "local", which are invariant on the Cosmetic changes. Workaround for not having a proc context. Use the thread0 context when Push the lwkt_replymsg() up one level from netisr_service_loop() to Fix typo with last minute change in last commit. Add header file to pull in the setting of the TCP_DISTRIBUTED_TCBINFO option. Send connects to the right processor. Add predicate message facility. Make the declaration of notifymsglist visible outside #ifdef _KERNEL Always send the sendfile header out even if the file has no data. Fix compilation errors with missing header files and misnamed formal parameter. Silence warning about missing prototype. Silence compiler warning by adding include files. Create another entry point into ip_input() so MT_TAGs will work. Don't need opt_tcp_input.h for TCP_DISTRIBUTED_TCBINFO anymore. Cosmetic changes. Allow an inp control block to be inserted on multiple wildcard hash tables. Pass more information down to the protocol-specific socket dispatch function Use a message structure off the stack for a synchronous call. Drop packet if the length checks fail in ip_demux(). Replicate the TCP listen table to give each cpu its own copy. The default protocol threads also need the check for Cosmetic changes. Remember if an inpcb was entered into the wildcard table to save Add restrict keyword to string functions. Move accounting of sendfile header bytes sent down one level to handle Trade off more writes for a simpler check for when to pull up snd_recover. Put snd_recover in the same cache line as snd_una. Make room in the Panic in udp_output() if a socket is found in an inconsistent state. Allow an inp control block to be inserted on multiple wildcard hash tables. Detect and foil optimistic ACK attack with forced slow-start Close race condition in accept(2). Try the ELF image activator first. Update some of my copyright notices before we officially publish Add the standard DragonFly copyright notice to go along with mine. Add the standard DragonFly copyright notice to go along with mine. Increase the size of the nfsheur hash table as pointed out by Readability changes, mostly removing the option to not do NewReno, Fix bug with tracking the previous element in a list. Get cosmetic changes out of the way before committing SACK. Move a comment to the right place. Allow the syncache to run lock-free in parallel on multiple processors Update includes now that the Fast IPSec code has moved to netproto/ipsec. From KAME freebsd4/sys/netinet/ip_input.c rev 1.42: Remove duplicate comment. Separate out the length checks from IP dispatch and also do them along Correct use of the flags argument to the recvmsg system call. Implement SACK. Fix bug with wrong length being used when coalescing out-of-order segments. Properly propagate the FIN flag from the following to-be-coalesced segment. We have to replicate listening IPv6 sockets in the wildcard table Handle window updates inside header prediction to increase the hit rate. Accept resets sent while the receive window is zero. Cache a pointer the last mbuf in the sockbuf for faster insertion. Merge from FreeBSD: Clean up routing code before I parallelize it. Clean up routing code before I parallelize it. Patch up user/kernel space difference with boolean types. Fix whitespace. Clean up the routing and networking code before I parallelize routing. Fix problem with last commit that was breaking tcpdump. Clean up the networking code before I parallelize the routing code. Fix off-by-one error with the range check for PRC_NCMDS. FreeBSD PR: kern/54874 Fix buffer overflow bug involving inet_ntoa(). Forced commit to say the previous commit wasn't really a buffer overflow Fix compile error. Add the new ND6_IFF_ACCEPT_RTADV flag to control whether to accept Add the "accept_rtadv" interface option to specifiy whether to accept Document the "accept_rtadv" interface flag. Back out port randomization. FreeBSD users report problems with it under load. Fix double-free problem when sysctl net.inet.ip.rtexpire=0. Fix double-free problem when sysctl net.inet.ip.rtexpire=0. Correct a byte-order bug with fragment header scanning. Set ip6_v6only to true by default. The administrators who want to use Cosmetic cleanups. Move a global variable into local scope for MP safety. Fix compile error. Now that I understand the poorly written BSD routing code and what Catch up to recent rtlookup() changes. Fix copyright notice. Remove the sysctl options for altering the initial TCP congestion window size. Remove the sysctl options for altering the initial TCP congestion window size. Increase the default TCP maximum segment size from 512 to 1460. Instead of explicitly initializing "fp" to NULL in kern_sendfile(), Now that we generate the ethernet header in place in the mbuf instead Code cleanup. Refactor some functions. Push some globals into local scope. If dhclient fails, an interface could be left with an IP address of 0.0.0.0. Temporarily disable non-working Path MTU discovery pending real fix. Now that 'so_pcb' is properly declared as a 'void *', remove a layer of Eliminate conditional check for initialized 'fp' on error in kern_sendfile(). Strip away convoluted route reference counting logic. Clear up confusion about negative route reference counts. Use malloc(M_ZERO) in preference to separate bzero() after allocation. Convert the struct domain next pointer to an SLIST. None of the callers of rtredirect() want to know the route that was modified, Remove (void) cast before a function call with an unused return value. Readability changes. Change a 'char *' to a 'void *' because that field is not accessed Cosmetic changes only. Prefer rtlookup() to rtalloc() when not saving the result of the look up. The route in a syncache entry is cleared if the connection was successfully Minimal patch that allows Path MTU discovery to be turned back on, but Give some guidelines on when to turn on Path MTU Discovery. Take into account the number of SACKed bytes skipped when slow-starting Use fixed-width type to ensure correct wraparound for Fix confusion with wrong route reference count being decremented. Better byte packing for struct tcpopt. Ensure that Limited Transmit always sends new data, even after a When doing Limited Transmit, don't retract snd_nxt if it was previously We can only do upper-layer protocol length checks on the first fragment. Eliminate a redundant variable assignment. Keep a hint for the last packet in the singly-linked list of packets A kludge to always give the driver a second chance to attach the cbb device. Use a larger initial window size when restarting after a long idle period. Minor cosmetic cleanups. Use the canonical name "ro" for a variable Defer assigning to the forwarding route variable until the forwarding Prefer TAILQ_EMPTY() to null-check on the result of TAILQ_FIRST(). sbappend() is called by stream-oriented as well as record-oriented Strip away a layer of indirection. Now that we properly declare Cosmetic changes, mostly changing zeros to NULLs. Clean up some of the sockbuf append code. Implement TCP Appropriate Byte Counting. Remove redundant assignment. Prefer m_getcl() to separate calls to MGETHDR() and MCLGET() in order to Fix typo with last commit. Deprecate MCLGET() in favor of m_getcl() or m_getl() in order to Use m_getl() to get the right sized mbuf. Rename local variable for clarity. Simplify the interface to m_uiomove(). Deprecate MCLGET() in favor of m_getcl() or m_getl() in order to Prefer the clearer m_getc() API over m_getm(). Generic cache of pre-initialized objects. It uses per-cpu caches Re-implement the mbuf allocator using the object cache. Convert to use m_getl() in order to take advantage of cluster caching and Allocate the right type of mbuf to begin with rather than switching types Should have allocated a mbuf packet header to begin with. Only allow packet headers to be copied into a packet header mbuf. Get an mbuf packet header to begin with instead of getting an mbuf and then Only duplicate packet headers into mbuf packet headers. A packet header without any packet tags is still a packet header. The header type of a mbuf doesn't change when appended onto a chain. Preserve the target M_EXT_CLUSTER flag when duplicating a packet header. Also preserve all the non-copied flags in the target mbuf when duplicating a Deprecate MCLGET() in favor of m_getl() in order to take advantage Fix typo that turns out to be harmless by accident, as MT_HEADER and Deprecate MCLGET() in favor of m_getcl() or m_getl() in order to Use m_gethdr() instead of m_get() to get a mbuf header. Deprecate MCLGET() in favor of m_getcl() or m_getl() in order to Replace the linear search in file descriptor allocation with an O(log N) Document the standard and historical behavior that open(2) returns The proper way to check for a normal mbuf cluster is with the A machine-independent spinlock implementation. It has the advantages of Add a space to the output for legibility. Check the IP length first to avoid a memory leak later. Fix indentation with previous commit. Now that the C language has a "void *", use it instead of caddr_t. Give each CPU its own taskqueue thread so per-cpu data can be Make zalloc() and zfree() non-blocking for ZONE_INTERRUPT zones. Prefer the general purpose m_getl() routine for mbuf allocation. Zero out stack memory before copying out to requesting process. Correct test for fragmented packet. Explicitly mark places in the IPv6 code that require a contiguous buffer. We're guaranteed m_pkthdr.fw_flags is already zero on allocation. Combine two allocations into one. Clear up mbuf usage statistics. Remove unneeded assignments. Fix missing malloc -> kmalloc conversions. Never dereference a NULL pointer. OpenBSD rev 1.66: Cosmetic cleanups. Refactor internal ip6_splithdr() API to make it more of a pure function Eliminate a macro layer of indirection to clear up the control flow. Cosmetic changes. Cosmetic changes. Remove nested block. Fix compilation error with IPSEC. Localize some variables. Clean up code. Fix userland compilation error. Apply FreeBSD rev 1.9: Apply FreeBSD rev 1.15: Apply FreeBSD rev 1.16: Apply FreeBSD rev 1.6: Apply FreeBSD rev 1.17: Clean up code. Add support for SSDT tables. Same as FreeBSD rev 1.27: Same as FreeBSD rev 1.19: Add some comments. Same as FreeBSD revs 1.28 through 1.30. Just bump version numbers due to null offsetting changes from FreeBSD: Merge FreeBSD revs 1.20 through 1.21: update list of supported tables. Sync up to FreeBSD rev 1.23: Add support for parsing MCFG tables. Add split on whitespace functionality. Jeremy C. Reed (9): testing cvs commit Create a manual page link of passwd.5 for new master.passwd.5. Add master.passwd as a name of this manual page. Only define atop for _KERNEL or _KERNEL_STRUCTURES. Document that the caret works like the excalamation mark Fix typo or mispelling. Fix typo. Comment out line about UDF specific mount options -- none were listed Add patch from 9.3.5 to 9.3.5-P1. This is for adding randomization Jeroen Ruigrok/asmodai (539): Clean up two lint warnings, by adding /* NOTREACHED */ in the appropriate Remove kerberosIV from the build. Correct spelling of Matt's surname. Install math.h. Remove math.h from Makefile, it gets installed through lib/msun/Makefile. Change __signed to signed. Remove kerberosIV a.k.a. eBones. It has served well, but definately Change __volatile and __const into volatile and const. Add, hopefully, support for the ICH5 sound hardware. First stab at our own UPDATING file. Remove reference to FreeBSD's documentation about their source trees. Add README for the documentation directory. Get rid of __const. Update the manual page to reflect the normal definitions. Remove more __const occurences. Add the RIPEMD-160 header file. Fix dependency that long is always 32 bits. This does not work on (I)LP64 Remove 'register' keyword while I am here. Get rid of __P, the days of K&R support are long past. Make sure a DiskOnKey never sends a sync_cache. This fixes the error Linux emulation has been working well for a while now, remove notice. Allow NVIDIA's nForce 2 chipset to use proper ATA DMA modes. Add OHCI support for the NVIDIA nForce 2 chipset. Add NVIDIA's nForce2. Reorder to alphabetical order. Add the AMD-766, Apple KeyLargo, and SiS 5571. Add a bunch of Intel USB controllers. Add NVIDIA nForce3 OHCI USB support. Add the NVIDIA nForce3 chipset. Change 'nvidianf2' to 'nforce2' to be more explanatory. Make our manual pages report that they are part of DragonFly. Get rid of DDB, INVARIANTS, and INVARIANT_SUPPORT in the boot floppies. Change # comments to /* */ equivalents. Properly detect and print the nForce2 Host to PCI bridge and PCI-PCI Add nForce AGP support, taken from FreeBSD with some minor changes to get Add identifier for the nForce2 PCI to ISA bridge. Remove the matching for the host to PCI bridge, since this is actually Add another vendor who delivers PL 2303 derived cables/convertors. Add -ifstat functionality to systat. It shows the network interfaces and Add protocols 134 and 135. Add Subversion's port number. Add Veritas NetBackup. Make sure to skip the PC98 architecture on the OPTi detection, since it Properly document getenv()'s return values using a slightly altered patch. Add __DragonFly_version, set to 100000, which basically means: Change wrapping definition from ending in DECLARED_ to DECLARED. Convert files from DOS to Unix. Add FreeBSD's makeobjops.awk. Rework the logic in the kernel config files. Add forgotten semi-colon. Factor out the object system from new-bus so that it can be used by Fix misplacement of code. Due to additional DF code everything got Add details on how to get a "checked out" source tree. Spell initialise correctly. Use unsigned integers for the counters, since they cannot be negative Remove the archaic wd(4) driver and its dependencies. Remove the archaic wd(4) driver. Properly spell compatible and compatibility. Use malloc() + M_ZERO instead of malloc() + bzero(). Remove haveseen_iobase(), it is not in use in the kernel. Add proper $FreeBSD$ identifier. Add two more awk kernel build scripts from FreeBSD. Remove KTR damage. Also revert the VI_UNLOCKED damage. Change $FreeBSD$ to $DragonFly$ on the output files. Get rid off the POSIX real-time extensions as well as the System V IPC and Temporary hack out release.9, which creates the floppies. Add support for the ICH 4 mobile chipset. Get rid off FreeBSD mirrors and add our own three (in Germany, Ireland, and Add forgotten newline in debug output. Add PFIL_HOOKS functionality. This allows us to plug in many firewalling Add the packet filtering files. Comment PFIL_HOOKS since it should not be needed in GENERIC. Add entry for the EHCI controller, pretty common these days (especially Replace the boring beastie with a dragonfly. Synch up with FreeBSD 5 with clean up changes. Add support for SoundBlaster Audigy and Audigy 2. Add PCI identifier to match against a Dell OEM version of the SB Live! Make sure we identify our toolchain as DragonFly, not FreeBSD. This is Make sure the vendor identifier matches the right project. The binutils Get rid of PZERO. This was removed about the first of August. Spell 'separate' and its siblings the way it is supposed to. Spell 'weird' the way English expects it. Synchronise partially with NetBSD's umodem.c v1.46: Synchronise partially with NetBSD's v1.85: Make sure cvs reports the proper OS. Update for the new version of CVS. K&R -> ANSI C conversion. Update config.h. Update with new source file targets. Fix typo: CAPF_ANYCLIENT->CAPS_ANYCLIENT. Synchronise with FreeBSD: Add support for the following AC'97 codecs: Get rid of question mark, the vendor ID is correct. Fix name of 'Silicon Laboratories'. Recognise the Asahi Kasei AK4544A and AK4545 AC'97 codecs. Recognise the Realtek ALC850. Add support for the Wolfson WM9711L and WM9712L. Add support for the Wolfson WM9709. Fix typo to make the TLC320 work. Add Texas Instruments' TLV320AIC27 AC'97 codec. Add support for the Conexant SmartDAA 20463. Fix the entry for the CX20468. The base default is 28h, not 29h. Fix misunderstood commit. The CX20463 is not the AC'97 codec. It _is_ Add missing comma. Add recognition code for the SiS 645DX. Add detection code for the SiS 746 ATA133 controller. Update to latest version. Cut our umbilical cord from mother FreeBSD. Add missing . before An macro at the end. Add -S flag for C99 support. Synchronise with NetBSD: get rid of __STDC__ selective compilation. Synchronise with NetBSD: ANSIfy. Synchronise with NetBSD: ANSIfy. Add support for CS4294. Add ATI Radeon RV280 9200. Forced commit to, belatedly, note that this (rev 1.1) was: Add proper entropy pool scripts and rc.conf lines, as it was. Add a clarification comment stating that Avance Logic's IC products are Add detection support for the Avance Logic (Realtek) ALC203 and ALC250. Sync with FreeBSD v1.16: Add support for the Texas Instruments IEEE 1394 controllers designated by: Change identifier display text to include all models supported. Add detection code for the Intel 82372FB IEEE 1394 OHCI controller. Add support for the Adaptec AIC5800 based IEEE 1394 cards. Detect the National Semiconductor Geode CS4210 OHCI IEEE 1394 controller. Add the SiS "7007" OHCI IEEE 1394 controller. Fix the vendor id string of NatSemi to its correct one. Add detection support for the Intel ICH6 chipset. Add support for the 82801FB UHCI/ICH6 controller. Add Intel 82801FB EHCI/ICH6 controller. Update per Intel 82801FB errata: Add Intel 82801FB/FBW/FR/FRW PCI-LPC detection code. Update string to show this device is the Hub-PCI bridge for ICH2, 3, 4, 5, Document addition of ICH6 UHCI. Clarify VIA Fire II identifier string. The device in question is the Sony CXD3222, not the CX3022. Add the Sony CXD1947, which seems to be used in some Sony Vaios. Clarify the Sony strings. Correct dumb copy/paste of existing line mistake to correct name. Update comment to point to the right file (src/sys/bus/pci/pcireg.h). Add rndcontrol. Update the PCIS_ definitions per the PCI 2.3 specification. Add PCIS_ definitions per PCI specification 3.0. Correct some PCIS to PCIC and PCIP prefixes. Update pciconf to print the recently added categories. Add identifiers for Serial Attached SCSI per dicussion with the PCI SIG Inspired by rev 1.33 of FreeBSD, but which was not documented: Document PCIY_xxx. Make unit_adjust() static as per its prototype. Garbage collect RETAINBITS, not used anywhere. Use static on file scoped function prototypes and variables. Make f_stream static. Make scanfiles() static since it is limited to the file scope. Also mark the function itself static, not just the prototype. Make setthetime() static per the prototype. Garbage collect two unused variables (SaveFs, CountCopiedBytes). Mark the variables defined at file scope static. Mark filescope functions static. Actually mark the function itself static as well. Rework the WARNS levels per FreeBSD's CURRENT source. Make usage() static. Raise WARNS to level 6. Correct usage of the cvsup command. Use err() instead of a perror()/exit() combination. Move a perror() to warn(). Use proper ANSI function definitions. Fix two sign comparison mistakes. Make the second argument of interval() const, since it is not changed Initialise to NULL to silence gcc. Add WARNS and set to level 3. Hook c99 up to the build for usr.bin. Bump version number for the 1.1-CURRENT tree. Remove stray fr_checkp() declaration. Use stronger wording against using 'register' and '__P()'. Get rid of varargs.h. Get rid of varargs.h. Get rid of the CFLAGS with traditional-cpp, it compiles fine without. Get rid of varargs.h. Add Pentium 4 Thermal Control Circuit support. Get rid of varargs.h. Actually add the main file for Pentium 4 Thermal Control Circuit support. Get rid of varargs.h. Update mk files list with the current supplied one. Add the latest source versions of OpenBSD's traceroute program (which came Add proper prototypes for dump_packet() and pr_type(). Use proper ANSI prototypes and make sure all arguments are supplied in the Use proper EXIT_FAILURE/EXIT_SUCCESS for exit() calls. Make all the local functions static. Split off code in print_all_info() into print_batt_life(), print_batt_stat() Remove question mark (?) case statement. Fix function definition {} placement according to our own style. Add the long overdue ehci(4) manual page. Update to reflect DragonFly reality. Add a temporary hack to avoid the local files to be picked up. Add a commented out STRIP variable to show people how to make sure installed Clarify the 's' and 'msg' arguments and note how send() and sendto()'s msg Remove a redundant call to gettimeofday(). Rework the wording in a different way. For send() and sendto() change Change rcvar from "mixer" to `set_rcvar`. Add updated Lithuanian locale (lt_LT-ISO8859-13). Update zoneinfo database with the latest information. Correct typo: mv -> nv. Get rid of COPY, its functionality has been superseded by install -C. Add first stab at Faroese locale. Get rid of the alpha entries. Forced commit to note that site_perl/5.005/i386-freebsd is now: Merge from vendor branch OPENSSL: Add OpenSSL 0.9.7d. Use proper filenames for the Faroese locales. Add missing entries for fo_FO, lt_LT, Argentina. -1, not -15. Add 136: UDPLite [UDP for error prone networks] Merge from vendor branch OPENSSL: Add OpenSSL 0.9.7d. MLINK sata(4) to ata(4). First stab at getting the Silicon Image (SiI) SATA controllers 3112 and Synchronize with our current code. Fix it properly. Switch from OpenSSL 0.9.7a to OpenSSL 0.9.7d. Commit manual pages after running 'man-update' and add new manual pages. Update per latest manual pages. Update per latest manual pages after running 'man-update'. Update per latest manual pages after 'man-update'. Change hackers@freebsd.org to kernel@crater.dragonflybsd.org. Add comment markers to avoid the same stupid mistake as I made. Get rid off the host.conf to nsswitch.conf conversion. Update per recent newsletters of the ISO3166 committee. Belatedly remember that in Nordic languages the aa, ae, and similar letters From NetBSD 1.11: Merge from vendor branch BIND: Add BIND 9.2.4rc7. Remove a '+', remnant of patchset. Switch to 9.2.4rc7. Fix typo of->pf. Unbreak addump(). Synchronise with FreeBSD-CURRENT as of 2004-09-26. Fix spammage introduced by dillon's commit in r1.5. Add WARNS, set to 3. Make WARNS ?= instead of = per all the other Makefiles. Bump WARNS to 6. Add WARNS, set to 6. Add WARNS and set to 6. Add WARNS and set to 3. Add, if not already present, WARNS and set to 6. Set WARNS to 6. Add WARNS and set to 5. Set NCURSES_CONST to const. Make ls compile under WARNS 6. Bump hostname to WARNS 6 by initialising silen. Add, if not present, WARNS and set to 6. Bump WARNS to 6. Add WARNS, set to 0. Bump WARNS to 6. Bump WARNS to 6. Regenerate. Fix author name with .An macro. Add missing period. Bump to WARNS 6. Bump WARNS to 6. Forced commit: Add Wacom Graphire 3. Regenerate. Bump to WARNS 6. Be consistent in the white space usage. Add some more vendors and some HP devices. Regenerate. Change SYNOPSYS to SYNOPSYS2 and regenerate. Change to use USB_PRODUCT_LINKSYS2_USB200M. Add the Unicode 3.2 ctypes table. Prepare for the locale additions. Add en_GB.UTF-8. Add UTF-8 time definitions. Be a coward and add ja_JP.EUC back in for now. Add la_LN.UTF-8. Add collation definition for UTF-8. Fix consistent mistake: CP1252 -> CP1251. Add zh_CN.GBK. Add Hye-Shik's UTF monetary defitions. Add Hye-Shik's UTF-8 message definitions. Incorporate Hye-Shik's work for numeric UTF-8 definitions. Correct BASE_LOCALEDIR to .. Correct the Polish locales per the submission of Bodek <bodek@blurp.org> Add sr_YU.ISO8859-2/5. Correct non-matching comment. Add definitions for _CTYPE_SW[0-4MS]. Add more locales: Use rune.h instead of runetype.h. Add omitted space after 'while'. Add NetBSD 2.0 and DragonFly 1.1. Add iso-C alias and p1003.1-2004 definition. Document ENOMEM error case, note how strdup() is 1003:2004 sanctioned. Remove leftover 'is'. Merge FreeBSD SA 04:16 security fix. Add IDs for the following: Regenerate. Add more detail to the 845, 865, and 915 family. Regenerate. Major cleanup and expansion of the NVIDIA id section. Regenerate. Merge from vendor branch OPENSSL: Add OpenSSL 0.9.7e. Add some nForce2 identifiers. Regenerate. Remove obsolete/unused file. Merge from vendor branch FILE: Add file 4.12. First stab at file's libmagic. Use the more commonly used SRCDIR instead of SOURCEDIR, does not seem to Add libmagic and reorder/restructure the list of the libraries to be build. Add LIBMAGIC. Use a space instead of a tab. Use spaces after the CFLAGS assignment as well. Simplify file to be a binary linking to libmagic (which contains the real Move the Magdir handling from usr.bin/file to here and simplify. Add libmagic to _prebuild_libs Revert from previous commit, seems to have to be solved elsewhere. Merge from vendor branch BINUTILS: Add binutils 2.15. Switch from OpenSSL 0.9.7d to 0.9.7e. Regenerate the manual pages after the OpenSSL update to 0.9.7e. Move from K&R function declaration to ANSI. Let the allocation of registers be done by compilers nowadays. The average Merge from vendor branch CVS: Add CVS 1.12.11. Add missing backslash. Remove file from build_tools, since it serves no build tool purpose. Add binutils 2.15 directories. First stab at bmake glue for binutils 2.15. Match GCC's configured target. Regenerate manual pages. Add ICH5 10/100 Mbit interface id. Get rid of unused hardware targets. Retire Alpha bits. Get rid of hardware architectures we do not use. Add abi.h which defines particulars for x86 and AMD64. Make sure to include header files from -I${ARCH} directories, in this case Replace the files with the NetBSD ones, which are the rewritten ones by libm hasn't been used in ages. msun has taken over its place for a long Add more files to the architecture specific section. Use ANSI C and get rid of the __STDC__ and other wrapping. Prefix -I's path addition with ${.CURDIR}. Get rid of an Alpha specific part. Prefix function names with __generic_ as per the rest, to remove the Add NetBSD's ieee754.h header file. Include sys/types.h for now. Add ieee.h files for amd64 and x86. Add lrint() and associated functions. Add matching line for ATI Radeon RV280 9200SE. Get rid of the register keyword. Add icmp6 alias. Update comments for the Belarussian locale. GCC supports two pseudo variables to get the function name, __FUNCTION__ Get rid of the #define mess for other platforms. We're just Unix. Get rid off conditionals for hpux, AIX, THINKC, TURBOC, MS_DOS, VMS. Get rid of the Alpha specific manual pages. Do not create Perl 5.00503 directories anymore since it is removed from base. Get rid of [cat|man]1aout. Remove [cat|man]1aout directories. Fix HISTORY by using a proper .Dx macro. Fix extraneous spacing of .Dx by changing two unneeded tabs to spaces. Add the additional space to make mount_udf.8's .Dx macro work for real. Get rid off the PC98 support. Use .Dx macro. Get rid off PC98 conditional code. Get rid off PC98 conditional code. Get rid off Alpha mentions. Add va_copy() implementation (thanks to Chris Torek's comp.lang.c post). Replace Turtle references. Merge from vendor branch CVS: Add CVS 1.12.12. Change EXIT STATUS to DIAGNOSTICS. We have standardised on the latter. Add gperf 3.0.1. Merge from vendor branch GPERF: Update to 3.0.1. Synch our GNU_PREREQ() macro with FreeBSD: __pure__ is supported from 2.96 onward, not 3.0. Get rid off the wrappers around __va_copy(), they serve no real purpose. Remove dllockinit.3 from the Makefile. Back out getloadavg()'s change from int to size_t. This breaks gmake, for Detail thread-safety conformance. ANSIfy. Add fsblkcnt_t and fsfilcnt_t. Add blkcnt_t and blksize_t per IEEE Std 1003.1, 2004 Edition. Add id_t, a general identifier type, per IEEE Std 1003.1, 2004 Edition. Add first stab at a statvfs.h. Sync to FreeBSD 1.14/1.15: Fix last two return() calls to comply to style. Bump FreeBSD identifier to 1.16 to signal to which version we synchronised. Document thread-safety. Add include file protection wrapper. Add NO_PKGTOOLS to disable building of the pkg_* tools during world. Remove NO_PKGTOOLS wrapper, it existed in top-level Makefile already. Remove extraneous closing brace. .Fx -> .Dx. Fix two installworld mtree warnings. Rename the variable PROG to LDR to remove a warning message about Add SHF_TLS and STT_TLS to complete the ELF ABI for TLS. Expand e_type with OS and processor-specific ranges. Seriously expand e_machine. Expand e_machine per ELF ABI of Dec 2003. EM_ALPHA has been assigned number 41 nowadays, reflect this fact. Demarcate e_machine reserved ranges. Retire EM_486. Add ELFOSABI for OpenVMS, HP Non-Stop Kernel, and Amiga Research OS. Add SHN_LOOS, SHN_HIOS and SHN_XINDEX. Add SHT_INIT_ARRAY, SHT_FINI_ARRAY, SHT_PREINIT_ARRAY, SHT_GROUP and Add SHF_MERGE, SHF_STRINGS, SHF_INFO_LINK, SHF_LINK_ORDER, Add section group flags (GRP_*). Add STB_LOOS, STB_HIOS, STT_COMMON, STT_LOOS, STT_HIOS, STV_DEFAULT, Add PF_MASKOS and PF_MASKPROC and realign the comment section. Add NetBSD's nls(7). Reorder alphabetically. Remove part about wscons. This will need information about our console Get rid off extraneous spaces in function prototypes. Add sshd example, might be useful for jails. Add manual page. Add DIAGNOSTICS. Add EXAMPLES. Actually add a copyright name. Add SUSv3 information. Document the fact we do not conform to 1003.1-2004/SUSv3. Document that we do not conform to SUSv3. Document that dirname does conform to SUSv3. Remove part about 89 and 1989, we live in the 21st century now. Add DIAGNOSTICS. Use consistent wording, blank has been changed to empty (which is POSIX Document SUSv3 conformity. Use POSIX wording of the -R option, the previous one was really unclear Add -f to the non-standard list. Pull -h to its own synopsis line to avoid possible confusion of it being Take the -h use case separate to make it clearer. Remove trailing /. Synchronise with NetBSD v1.18: Add vendor ids for ATi and Philips. Update with the NetBSD code (which can include FreeBSD/OpenBSD changes): Update FreeBSD tag to what the source code has. Sync with NetBSD: Place && at the right place. Synchronise with NetBSD (some come from OpenBSD): Synchronise with NetBSD: Clean up manual page. Add DragonFly release numbers. Fix PROCDURE -> PROCEDURE. Merge from vendor branch BIND: Merge from vendor branch BINUTILS: Merge from vendor branch CVS: Fix PROCDURE -> PROCEDURE. Merge from vendor branch GCC: Merge from vendor branch NCURSES: Fix PROCDURE -> PROCEDURE. Fix PROCDURE -> PROCEDURE. Fix PROCDURE -> PROCEDURE. Fix PROCDURE -> PROCEDURE. Fix PROCDURE -> PROCEDURE. Use FreeBSD's HEAD tag. Merge from vendor branch TEXINFO: Add texinfo 4.8, appropriately stripped down. Switch to texinfo 4.8, which is needed for a lot of new texi files. Commit a missed change. fdl.texi is needed. Merge from vendor branch TEXINFO: Merge from vendor branch GROFF: Add groff 1.19.1, stripped down appropriately. Merge from vendor branch GROFF: Add groff 1.19.1, stripped down appropriately. Merge from vendor branch GROFF: Update and reorder. Remove mdoc.local, we need to make this truly local. Merge from vendor branch GROFF: Remove mdoc.local, we need to make this truly local. Taken from FreeBSD-HEAD: Retire old perl 5.00503 optional manpath and replace with a manpath to Update to groff 1.19.1. Get rid of the old texinfo. Get rid of the old groff. Add patch to add Dx macro definition to doc-syms. Use groff 1.19.1 supplied file. Culminate all our local changes into one file: Remove these files, we augment with mdoc.local what groff delivers us Fix for the error Get rid of xditview, since by default we do not have X installed, it makes Update to groff 1.19.2. Merge from vendor branch GROFF: Update to groff 1.19.2. Update to groff 1.19.2. Add pdfmark to the build. Unhook cvsbug from the build. Get rid of the Makefile. No need to prototype main(). Synchronise with FreeBSD: Move the termwidth declaration over to extern.h, where is make more sense Get rid of the third clause from the UCB license. Make len size_t, since strlen() returns a size_t, case i to size_t later Use warnx() instead of home-rolled fprintf() constructions. ls.1: Add y to SYNOPSIS line. Update FreeBSD Id. Synchronise with FreeBSD: Make sure err() does not use a NULL format string, this way we at least Synchronise with FreeBSD v1.65: Synchronise with FreeBSD: Fix long standing logic bug in basename() introduced in 1.4. Update FreeBSD Id and synchronise with FreeBSD: Move the Ids down to where they should be. Fix one last nit: compare the result of strlcpy() only to see if it is Rework dirname() with the same logic Joerg introduced for basename(): Restore the MAXPATHLEN comparison in basename.c, don't forget that strlcpy() Welcome to 2005: Partial synch with FreeBSD v1.74: Add inttypes.h for intmax_t. Partial synch with FreeBSD 1.74: Bump FreeBSD Id. Synchronise with FreeBSD: Synchronise with FreeBSD: Synchronise with FreeBSD up to and including v1.86. Synchronise with NetBSD: Reflect type change and remove third clause. Get rid of the third clause. Fix broken comment. Get rid of the third clause where we can. Bump OpenBSD Id. Synchronise with v1.73: Synchronise with FreeBSD: Fix strmode()'s parameters to reflect the reality it had been in for a long Pull in sys/types.h if it has not been parsed yet. Actually add a manual page for xe(4). Actually hook up ipw. Document and order the wlan devices. Fix the function declaration. Enable wide character support in ncurses, since we have it, better make Fix accidental reversal of assignment for suffix/suffixlen. Revert last commit for two reasons: Add the wide character files for ncurses. Add usr.bin/stat to bootstrap-tools: Use echo instead of ls to test for files, change test logic slightly for Replace use of ls with echo and tr. Get rid of ls and use basic sh/echo constructs. Remove bin/ls from the bootstrap-tools. getchar() is void in its prototype. Joe Talbott (11): * testing Make vkernel compile with 'options SMP'. Most functions are stubs that Fix files that included the posix scheduling headers that were merged earlier. Add break after parsing the -n option. Add usched_set() manpage. Add support to vkernel for locking virtual CPUs to real CPUs. Let the user know if they attempt to start a vkernel with vm.vkernel_enable Make mbuf allocator statistics SMP safe. Fix conditional so that the linux module is loaded. Add support for Cisco-Linksys WUSB54GC which has a different vendor ID Test commit Joerg Sonnenberger (2002): Move the FreeBSD 2.2 and 3.x PCI compatibility code into pci_compat.c and let it Add/uncondionalize the sixt argument of set_memory_offset to ease the migration Add card_if.h dependency Split off the PCCARD specific driver parts of fd and sio and remove last Add hw.firewire.sbp.tags to control tagging for SBP devices. The default Add PCIBUS to nexus Add rman_get_device and rman_get_size, use macros in nexus.c Make hw.firewire.sbp.tags tunable Add black hole device for system PnP IDs Fix indentation to tabs, no functional changes Drop chip driver and merge the functionality into pci_probe_nomatch. Sync pci_cfgreg.c with FreeBSD 5, rev. 1.101. This makes the PCI interrupt Fix linker issues with /usr/libexec/elf/ld not using rtld's search path Add pcib_if.m Add pcib interface methods. Remove HOSE support which should be implemented in the bus function when Use nexus_pcib_read_config instead of pci_cfgread. Do some cleanup. Add comment for nexus_pcib_write_config. Replace pci_cfgread and pci_cfgwrite with PCIB_READ_CONFIG and Fix compile errors introduced with last commit Fix PCI deadlock on boot Add DragonFly specific headers. Add __DragonFly__ and correct the specs. This is Fred, not Beastie Add a dummy dependency for each target make(1) is called with when Add lib/gcc3/csu Fix tconfig.h dependency Add lib/gcc3/libgcc and lib/gcc3/libgcc_r Import libstdc++ from GCC 3.3.3-pre 20031106. Merge from vendor branch LIBSTDC++: Add directories for lib/gcc3/libstdc++ Conditionalize use of vfscanf Implement __cxa_atexit and __cxa_finalize as requested by the cross-platform Add C++ runtime libraries for gcc3 Fix missing .. from last commit Rename GCCVER to CCVER and prepend gcc to the former values Clean multiline string literal for gcc3 GCC 3.3 doesn't support #pragma weak A=B, when A is defined. This patch Fix multiline string literals Fix more multiline string literals Fix compilation with GCC 3.3 and a few warnings Fix GCC 3.3 compilation and style(9) Include ${.CURDIR} before bin/ls to get the right extern.h Use C99 syntax for variadic macros Add -I{.CURDIR} to CFLAGS Add missing \ to string Add -I${.CURDIR} to Makefile Remove bogus sys_nerr defines Fix prehistoric C Fix prehistoric C Remove some prehistoric C crap Add support for ${BINUTILSVER} to objformat. This defaults to gcc2 Adjust program names by removing the additional 3 Remove generated file from repo Force make to export CCVER for buildworld Both cc and cc3 were built using CCVER=$(CCVER) for certain compiler path. GCC3 doesn't inspect #line to allow inclusion relative to the source Add -I${BIND_DIR}/bin/nslookup Fix prehistoric C The __cxa_atext/__cxa_finalize patch is based on FreeBSD OR bin/59552 Remove bogus -fstrict-prototypes which isn't support for gcc3. Use new style variadic functions Add GCC3 to buildworld. Add -ffreestanding to CFLAGS for gcc3 Split multiline string literal Don't use non-existing DragonFly override ports for ports using Cleanup the fix for dependency handling. This explicitly checks wether Replace K&R style function declarations with ANSI style one. Rename bsd.cpu.mk into bsd.cpu.gcc2.mk and add the proper Makefile magic Always re-export CCVER Fix make complaining about CCVER being not defined Always include bsd.init.mk to fix CPUTYPE evaluation Remove C++ from libc_r. Use __attribute__((constructor)) instead. Really remove C++ from libc_r To let the crazy Perl Makefiles work, add the support for ../Makefile.sub Rename Makefile.inc to Makefile.sub Rename Makefile.in to Makefile.sub Remove Makefile.inc Rename Makefile.inc to Makefile.sub Add support for the AMD 8111 chipset Change functin definitions to conform to style(9) Fix various warnings and other glitches. Move binutils from /usr/libexec/gcc2/{aout,elf} to Add getopt_long from NetBSD Build lwkt_process_ipiq_frame only for the kernel, since Fix various warnings. Update style(9) to reflect current code practise. Sync DragonFly and FreeBSD-current's FireWire driver. Add missing sbp.h to complete the firewire sync Second part of the firewire sync. Add defined(__DragonFly__) or Fix bug when doing backquote expansion. Allows option to be specified on the command line when mount with -a. De-K&R-ify source, remove register keywords. Adjust infrastructure for NEWCARD Add a tunable hw.pci_disable_bios_route to work around broken PCI-BIOSes. Add lost -D__FreeBSD__ Initial backport of NEWCARD from FreeBSD 5. Allow choosing different GCC versions for buildworld and buildkernel as Fix gcc3 compilation Add defined(__FreeBSD__) and defined(__DragonFly__) where appropiriate. Add defined(__FreeBSD__) and defined(__DragonFly__) where appropriate Fix gcc3 compilation Sync with FreeBSD's pccarddevs __FreeBSD__ to __DragonFly__ Probe via CIS lookup Add accessor for CIS4 and change some functions to take const char* arguments Add __DragonFly__ Add __DragonFly__ Always include net/bpf.h Add __DragonFly__ Add __DragonFly__ Add __DragonFly__ Add __DragonFly__ Add __DragonFly__ Add __DragonFly__ Add __DragonFly__ Add __DragonFly__ Add __DragonFly__ Fix wrong conditional from last commit Add __DragonFly__ Fix broken string literals Fix # style comment in file using the C prepocessor Add __DragonFly__ Add __DragonFly__ Sync if_ed with FreeBSD current Fix warnings about casting const pointers. Add device_is_attached to allow a driver to check wether a given device Add PCIR_BAR and PCIR_BARS for FreeBSD 5 compatibility Add pci_get_ether and pci_set_ether for FreeBSD 5 compatibility Add BUS_DMA_ZERO flag to bus_dmamem_alloc. Install getopt_long.3 pmap_zero_page expects an address, not a page number Fix typo. Add proper match routines for PCCARD nics. Remove duplicate line for if_ray Add PCCARD match routines sio and the disk drivers. Add PCCARD match function to ata. Code taken from FreeBSD. Remove some unneeded #include PCCARD has a central device database in bus/pccard/pccarddevs, add one Add generated pcidevs files. Fix a small typo in devlist2h.awk. Add supfile to fetch only dfports INTR_TYPE_AV is used by FreeBSD 5 code and was defined to INTR_TYPE_TTY Fix compilation with -fno-common Certain port Makefiles expect variables like ARCH or HAVE_GNOME to be set Add missing make_dev DFports cleanup part(1) Split off the PCI-PCI bridge and the PCI-ISA bridge code from Add support for the kernel printf conversion specifiers %b, %D and %ry. Use -fformat-extensions when building kernel with GCC2. Hide ISA compatibility layer under COMPAT_OLDISA Use ovbcopy instead of bcopy to match prototyp Fix some warnings Fix a typo and include <sys/random.h> Remove unused static variable Add prototype for bootpc_init Remove unused static declarations Fix format string Fix argument order for snprintf, the size is the second argument Fix format string Fix spurious warning about ANSI trigraphs Conditionalize filll_io and filll, they are only used with VGA_NO_MODE_CHANGE #ifdef0 pst_shutdown, it is not used edquota(8) should honour MAXLOGNAME. Remove mixerctl script from NetBSD and add a replacing mixer script. Style(9) cleanup. Remove K&R style prototyps and use __BEGIN_DECLS/i Fix use after free / double free bugs. Return an error in error conditions. Adjust mixer script to depend on mixer_enable="YES" and default to NO. Cleanup emujoy_pci_probe Don't use parameter names for kernel prototyps Remove parameter names. Remove the entry for pccard and allow src/include/Makefile to properly Remove parameter names, adjust white spaces in prototyps and remove Remove unused and undocumented strhash files. Add missing return from last commit Use ifp->xname instead of if_name(ifp) There are historically two families of fixed size integers, u_intX_t and Add bfe(4) support from FreeBSD. De-K&R-ify function prototyps and remove register keyword. Merge FreeBSD rev. 1.8: Adjust indentation, use uint32_t and line up comments. Make subr_bus.c more consistent with regard to style(9) and itself. Use M_WAITOK instead of M_WAIT to get memory. We have /etc/rc.subr, don't add the dependency The slab allocator has been in for while now. Change the printf for invalid Add patch infrastructure for contrib/ and similiar directories. Propolice for GCC 3.3 based on: From FreeBSD: Fix panic in acd_report_key when ai->format==DVD_INVALIDATE_AGID. Handle failure in atapi_queue_cmd correctly While converting ATA to use MPIPE allocations, ata_dmaalloc was changed Remove a debug printf added with the last commit. Use local cpu tcbinfo Initialize the interface name for if_de Initialize all fields in MALLOC_DEFINE and VFS_SET to fix warnings. Add a short cut for DVD_INVALIDATE_AGID to simplify the rest Think before commit and remove some more cruft The free(9) implementation based on the slab allocator doesn't handle Merge the kernel part of UDF support from FreeBSD 5. Add userland UDF support based on mount_cd9660 Merge from FreeBSD 5: Add convient functions for the bus interface: child_present, Add BPF_TAP and BPF_MTAB macros from FreeBSD Fix warning about missing prototyp for psignal Remove macro definitions for BPF_MTAP Remove unused BSDI I4B interface BPF has been in the kernel for ages and is supported by all NICs but I4B. Remove BPF_MTAP definition Some drivers depend on the link layer address in ac_enaddr. Add a new function ether_ifattach_bpf which can be used by NICs not using Merge FreeBSD's rev. 1.81: Add support for building dependent modules automatically by Cleanup sis(4): Add default case of error = EINVAL to ether_ioctl Add device IDs of BCM5788, BCM5901 and BCM5901A2. Fix some spelling mistakes. Change to PCI_VENDOR_DELL Replace the Perl scripts makewhatis(1), makewhatis.local(8) and catman(1) Add functionality to binutils 2.14's ld to scan /var/run/ld-elf.so.hints kern_sysctl.c Remove the old locking based on memory flags by lockmgr based code. Revert last commit. This should not have happened. Add SI_SUB_LOCK as sysinit priority for the initialisation of tokens and M_NOWAIT => M_INTWAIT conversion. This subsystems are way too crucial to Remove unused obsolete drivers. Fix warning about unused variable Add the "struct ucred *" argument to the remaining nic ioctls in LINT. KObj extension stage I/III KObj extension stage II/III Convert sis(4) from vtophys to busdma. Allocate the DMA segment array in bus_dma_tag_create instead of using a Correct C++ header handling for gcc2 and lex. Readd _G_config.h and the missing std headers. This brings C++ back to where Since GCC 2.95.4 is known to produce bad code for higher optimization Adjust the C++ preprocessor to include /usr/include/c++ by default for Add support for AC'97 codec of the AMD-8111 chipset. This is _SYS_XIO_H, not _SYS_UIO_H. nawk => awk Explicitly build the boot2.c without propolice. We can't really handle Remove unit from sis_softc and use device_printf and if_printf instead of KObj extension stage IIIa/III Don't print the recording sources to stderr, the manpage doesn't indicate Do some style(9) cleanups and make add static. Fix output format of "mixer -s", it is supposed to be =rec rdev1 rdev2 ... Use "mixer -s" for saving the mixer settings and adjust messages since we KObj extension stage IIIb/III Move IFF_PROMISC and IFF_POLLING from ifnet.ipending to ifnet.if_flags, Nuke unused fields in struct ifnet, if_done and if_poll_* hasn't been used for() ==> TAILQ_FOREACH Merge changes from FreeBSD 5: In contrast to FreeBSD 4 and 5, our slab allocator does hand out cross-page m_tag_alloc illegally passed the mbuf flags to malloc, hitting the Partial sync with kernel to get libcaps compilable again. Fix bsd.port.subdir.mk by adding the normal environment hacks Small style fix Sync libcr with libc. Conditionalize accept_filter variable on defined(INET). Move the Plug'n'Play BIOS support into a separate file. This is included POSIX lock resource limit part 3/4 Fix races in lf_getlock and lf_clearlock when waiting for memory. Serialize access to lockf via pool tokens. Switch to the callout interface and rename the associated entry to sis_timer, Add ifmedia infrastructure for the generic IEEE 802.11 support. Two more defines from FreeBSD. Fix panic due to the way change_ruid handles the userinfo. Merge rev 1.10 from FreeBSD: Fix two bugs in the lockf code. The first one is a missing reinitialization Sync em(4) with FreeBSD current. Most important is the initial bus DMA support. Add the lockf regression test from NetBSD, slightly modified to test - remove em_adapter_list, it was not used for anything beside adding and Readd em_read_reg_io and em_write_reg_io for workarounds in various Add PCI IDs for i865 agpgart support. Temporary switch cc3 to the old stabs debugging format to unbreak gdb5. Update bktr(4) to FreeBSD current's version. This most importantly Add dev/video/bktr and dev/video/meteor to the header file list. Use struct thread for kernel threads, not struct proc. Explicitly cast away the volatile for conversions and argument passings. Explicitly cast-away volatile since it should be save here. Initialize the magic cookie using real numbers instead of a multi-character Fix some const warnings The const call is linted, use proper cast to silence GCC Use __DECONST to silence GCC. Make some private routines static. Use volatile and __DEVOLATILE to silence gcc warnings. set_lapic_isrloc depends on APIC_IO for the prototype to exist, it isn't Improve the way error message from ALART are printed. Include ns.h to get prototyp for ns_cksum Hide unused function under #ifdef SMP Hide unused functions to silence GCC Don't cast away the const before dereferencing. Move extern declaration to file scope to fix warning Change pr_output's signature to take two fixed arguments and possible Make pr_domain and pr_usrreqs pointers to const. The general stack is not Make pr_input use variadic arguments for anything but the first mbuf. Continue cleaning em(4). Fix a small bug in the last commit. ether_ifdetach has to be called em(4) assumes that bus_dmamap_destroy of bus_dmamap_load_mbuf maps Begin implementing a -liberty replacement for binutils and GCC under Fix the warranty, this is not UCB code Add implemenation of lrealpath. This works like realpath, but returns Add Makefile support for host programs (.nx) just like we support normal Import BSD-licensed crtbegin/crtend support. Switch from GCC-version specific crtbegin/crtend code to the version Add the old CSU files to list of file to be deleted and keep the new ones. Build infrastructure for GCC 3.4 Add directory entries for GCC 3.4. Add CCVER=gcc34 support to bsd.cpu.mk. Also add the magic for AMD64 support Nuke lib/csu/i386 (a.out support) and copy lib/csu/i386-elf there. Really use the host compiler in bsd.hostprog.mk Fix warning Don't whine about malloc/realloc "poising" the code for YACC files. The insn-conditions.c generated by stock GCC does some bad premature Include ProPolice suport for GCC 3.4. Include ProPolice suport for GCC 3.4. First stage in cleaning the built-in pathes of gcc. Adapted patch from Remove the evil inline ==> rpcgen_inline CPP hack, rename the variable Fix GCC 3.4 build Fix compilation "Label at end of compound statement" and some missing Fix GCC 3.4 build. Fix GCC 3.4 build Redo the ProPolice patches, there were partly broken. Manually recurse into ../cc_prepend for depend to workaround Use our specs, not FreeBSD's. Always set the _CPUCFLAGS in bsd.cpu.gcc2.mk to some known, safe Define __DragonFly_cc_version for CPP Don't include _CPUCFLAGS since the host compiler (aka NXCC) might not Export HOST_CCVER via environment to fixate it to either the specified Explicitly recurse into gnu/usr.bin/cc34/cc_prep for depend to ensure For cc_tools, recurse into cc_prep when building dependencies to get Add the STABS default output hack for GCC 3.4 too. make => ${MAKE} make ==> ${MAKE} ${.TARGET} Merge from vendor branch GCC: This is GCC 3.4, not 3.3 Add missing WI_UNLOCK Fix a race in the initialisation of struct lock by moving the TAILQ_INITs Print the correct list in _lf_print_lock procfs_validfile does have a public prototyp, but doesn't seemed to be used. Remove cast as lvalue Remove cast as lvalue Use const char * for string argument of _assert_sbuf_integrity and _assert_sbuf_state Remove invalid tokens after #endif Add a default initializer for data_sds. The warning from GCC is not correct, Remove invalid tokens after #endif Add the support for BSD format specifiers. This was adopted from the Announce MAC address in ether_ifattach, not in each NIC indepently. Add common functions for computing the Ethernet CRC on arbitrary length Add ETHER_ALIGN for portability. Include the header files from the current source tree and not from /usr/src. Add IFCAP_POLLING for per-interface polling support. Add re(4) as kernel module. After some feedback, this will be added to the Add support for setting polling on a per-interface base. Add re(4) to the list of manpages. Update the list of NICs supporting Welcome BPF in the 21st century and remove all the pre-ANSI C, BSD < 1991 Add per-device polling support. Add PDEBUG call for device_shutdown. In lf_wakeup, once we got a range embedded in the unlocked range, Stop using if_dname, use if_printf or ifp->if_xname instead. Don't init sc->re_timer twice. Fix grammatik error. Don't return 0 from rl_probe, because rl(4) is not a best match for Add if_broadcastaddr to struct ifnet to hold the link layer broadcast address. Add struct ucred * argument to ng_fec_ioctl Add llc_snap shortcut. Add handling of patches to the module framework. Remove unused variable _PATCHES. Unify the input handling of the low-level network stack by introducing Sync with FreeBSD 5-CURRENT. Regenerate. Add RC4 to the crypto module / device. This will be used by the generic Instead of casting the function, cast the argument to the (correct) type. Comment out extra token at end of #endif. Remove usage of NTOHS / NTOHL / HTONS / HTONL. Last commit changed a NTOHL to ntohs, correct this. Extend the patch framework to handle non-compilable files. E.g. for Add macro to test for broadcast / multicast Ethernet addresses. Don't cast lvalues. Don't use cast as lvalues. Import generic 802.11 layer. Add 802.11 include directory Add 802.11 include directory - turn a strcpy into a strlcpy to avoid overflow Make raycontrol(8) WARNS=6 safe by adding const and fixing a signed/unsigned Use netproto/802_11 includes, instead of net/if_ieee80211.h Use netproto/802_11 includes instead of net/if_ieee80211.h. Use netproto/802_11 includes instead of net/if_ieee80211.h Use netproto/802_11 includes instead of net/if_ieee80211.h and Use netproto/802_11 includes instead of net/if_ieee80211.h. Refer to netproto/802_11/ieee80211{.h,_ioctl.h} instead of net/if_ieee80211.h. Remove now obsolete header. Sync with FreeBSD CURRENT. Add two more 802.11 media types. Add IF_QLEN and the ALTQ macros. This are only the lock-free versions with NTOHL(x) ==> x = ntohl(x) NTOHL / HTONL removal. Sync with FreeBSD CURRENT (most white space cleanup and ordering). Change (almost) all references to tqh_first and tqe_next and tqe_prev Forced commit to annotate the (unrelated) changes from the last commit. Release to correct ressource in re_detach, this is PCI_LOIO now. Add re(4) to GENERIC. Add re(4) to LINT as well. Yet another hack for x11/xorg-clients. The EISA attachment of vx does have a softc, export it's size correctly. Add LIST_FOREACH_MUTABLE which works like TAILQ_FOREACH_MUTABLE. Add a description for LIST_FOREACH_MUTABLE and TAILQ_FOREACH_MUTABLE. Add MODULE_VERSION(pci, 1), e.g. agp(4) can't be loaded as module otherwise. IOCTL mapping layer Part I/II - remove '?' from getopt switch case Add DFOSVERSION for ports to check for DragonFly and allow them to handle - remove prototype for main IOCTL mapping layer Part II/II Make the addr parameter to kernacc and useracc const. - WARNS ?= 6 clean Add strotonum(3) into the !ANSI_SOURCE && !_POSIX_SOURCE && !__STRICT_ANSI Minor style changes. - use const for file names, static for local functions Fix the fprintf statement for overlong domainnames. Make libftpio WARNS=6 clean. More constify. Fix various buffer overflows. In cmd(), after the vsnprintf is a strcat done to append a newline. Make this WARNS?=6 clean by explicitly using __DECONST for the write Add implemenation of splay tree and red-black tree. Use sys/param.h instead of sys/types.h for endian macros. - use WARNS?= 6 instead of a hard-wired list. -pedantic doesn't really work sys/types.h ==> sys/param.h for ntohl Fix spurious warning Hide prototyp for loginit if ACULOG is false. Fix prototype for signal handlers sys/types.h ==> sys/param.h for endian macros sys/types.h ==> sys/param.h for endian macros - include sys/param.h for ntohl - include sys/param.h for endian macros - include sys/param.h for endian macros - include sys/param.h for endian macros - include sys/param.h for endian macros sys/types.h ==> sys/param.h for endian macros - include sys/param.h for endian macros sys/types.h ==> sys/param.h for endian macros rev 1.35: Remove ASR_MEASURE_PERFORMANCE, it doesn't work anyway. Improve the endian support for DragonFly by providing functions to convert Remove pre-FreeBSD4 compability code. Use sys/types.h here, since we don't want to restrict ourselves to the safe Fix compilation of !PCI config. Fix linker set creation for GCC 3.4 with -funit-at-a-time. Add support for Nforce onboard ethernet. This is the content of Fix the way warnings are printed. Use warnx(3) if the message is provided, Remove the unit included in the softc, it is not used beside the call to Create the dma maps before allocating the memory. Use WAITOK for the Save current version of wi(4) as owi before switching to generic 802.11 Add const for argument of argmatch to fix warnings. Import the new wi(4) driver based on the generic 802.11 layer. Minor style cleanups. Sync driver: Add wlan and crypto for the new wi driver. Sync with FreeBSD. Most importantly, add handling for -e, -H and -F Make mark/unmark static. Use getprogname() instead of __progname. Make edstop and writeback static. Remove !TIOCSTI code and TIOCEXT conditionals. This have been in the Remove void casts of function return values. Remove empty lines for missing Transform the PCI probe ifs into the normal loop. Merge the controller Don't sleep and commit. Fix the last commit. Fix some warnings in the code about non-ISO C prototypes and void * Add GID_MAX and UID_MAX. From FreeBSD 5: From FreeBSD 5: Minor style cleanup. Use err(3) since we are interested in the error message. Remove a duplicated PCIR_BAR definition. Remove the unused PACKET_TAG definitions and document the structures used Don't append '\n\0' to the return value of clnt_sperror if the string was Make clnt_create take both host and proto as const char * arguments. - statify functions IP6A_SWAP is never set in our code base and this fragment doesn't even timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Convert axe(4) to use splimp/splx instead of mutex calls, use the timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Add axe(4). timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Remove dead code which depends on timeout interface._* Convert timeout ==> callout_*. timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* SUS3 specified that netinet/in.h provides ntohl and may provide all the The prefered location of the byteorder functions is arpa/inet.h. This timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Remove unused consumer of timeout timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Don't use dev/acpia/* includes for the ACPI5 infrastructure. timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Replace the timeout based busy-loop with a DELAY(1000) based busy-loop. timeout/untimeout ==> callout_* if_clone_event should take a struct if_clone *, not a struct ifnet *. timeout/untimeout ==> callout_* - ISO C cleanup Move the timer declaration a bit up. I don't know why GCC 3.4 works and timeout/untimeout ==> callout_* Split DN_NEXT into a version with and without cast. For the left side usages, timeout/untimeout ==> callout_*, even though this is currently not used. Don't include arpa/inet.h, which needs certain struct to be defined. Remove unused defines timeout/untimeout ==> callout_* Don't include the PCI parts until the our system has involved to support timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Fix some warnings / typos timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Change the FreeBSD 5 jail sysctls to the correct DragonFly locations. timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Fix typo. timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* for p->p_ithandle It is unlikely that NetBSD wants to take this code back or that DF will timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Move the callout init below the softc allocation. *sigh* timeout/untimeout ==> callout_* Use ioctl_map_range instead of ioctl_map_cmd, as required by the mapping API. timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Remove cast of lvalues. Remove extra tokens at end of #undef. timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Change getgrouplist(3) to take gid_t arguments for the groups. The CIS vendor and product string routines return the string directly, not Kernel part of PF WARNS=6 cleanes Also define global uid_t and gid_t variables and remove some warnings. Install PF header files. Add various ICMP defines used by PF. Add a new option "TIMER_USE_1", which allows switching the primary heart Use libcaps_free_stack instead of kmem_free for userland Sync defines with sys/globaldata.h. Include machine/cpu.h for userland to get clear_lwkt_resched. PF uses -1 as rule number for the default entry. To make the value more Uesrland part of PF Use const char* for tcpstates Add pidfile(3). s/pidfile/my_pidfile/ This should read const char *, not char char *. Use the protected names for BYTE_ORDER and co, the others are not defined Typo. Change the conditionals to use the protected _BYTE_ORDER defines. BYTE_ORDER ==> _BYTE_ORDER Fix typo Add GPL-free patch. This is the extended version of Larry Wall's original From FreeBSD: Allow ip_output to be called with rt=NULL by making the FAST_IPSEC code - make process() static and take const arguments tsptype is an array of character constants, declare it as such. EVENTHANDLER_REGISTER uses the name of the event as string and therefore We now have pidfile in libutil, update kerberos5 accordingly. Make the BSD patch WARNS=6 clean Don't use patch -b, the flag has different meanings for BSD and GNU patch. Switch patch(1) from GNU to BSD version. Fix the setup for initial processing. Always use argv[0] and correctly Add CTASSERT, a facility for simple compile time assertions. This is useful Fix the code for the nxt != 0 case and use the assembler backend. Fix locations of PF helpers. Add some more functions for -liberty and make the library WARNS=6 clean. GCC 3.4 doesn't check for NULL format-strings for __attribute__((printf)) Import zlib-1.2.2 using new-style contrib handling. Merge from vendor branch ZLIB: Import zlib-1.2.2 using new-style contrib handling. Remove the GCC 3.3 build system. Use SCRIPTS instead of a beforeinstall target, it's cleaner. I have to specify SCRIPTSNAME too, otherwise bsd.prog.mk strips the extension. Welcome GNU bc to the Attic. Fix the references to libz sources scattered over the tree, make them Import GNU readline 5.0. Import GNU readline 5.0. Merge from vendor branch READLINE: Build framework for GDB 6.2.1. Forced commit to add missing annotations. Hide prototype for tilde_expand, it's incorrect now. Add the info pages for GDB 6. The inc-hist.texinfo and rluser.texinfo are Complete doc infrastructure. Merge from vendor branch GDB: Import GDB 6.2.1 as obtained from without the files in Good bye, GNU awk. RIP - fix order problem in Makefile, Makefile.${ARCH} has to be included before RIP GNU patch. Remove the GCC 3.3 library build framework. Remove GNU-CSU Makefile Add gzip based on libz. This is faster for decompression and yields Add support to specify the default compiler and binutil version in gcc3 ==> gcc34 Change the way binutils versions are handled. Depending on CCVER, Change the variable from CCVER_DEFAULT to OBJFORMAT_BUILTIN_CCVER_DEFAULT Define PRId64 as "lld" for older systems laking support for it. Switch to libz-based gzip. Remove the old one-true-awk. We have to support uncompress by default, because zcat is used e.g. Move libstdc++3 into the attic. It can be removed from the repo later. RIP gzip, we found a nicer playmate. Remove a.out support from base. Bumb WARNS to 6 for ldconfig. Fix a race condition in detach path of Ethernet devices. Most current We don't have lkm, use aftermountlkm instead. Small speedups: Remove references to sysinstall, it vanished quite some time ago. Fix the column width for IPv6 to correctly align the fields. I don't know why I added use_bpf.h, it isn't needed of course. Define IOV_MAX for userland too and make it available via sysctl and Use the speedup patches, too Import OpenNTPD 3.6 (the OpenBSD version, not the portable). Merge from vendor branch NTPD: Import OpenNTPD 3.6 (the OpenBSD version, not the portable). Switch to gdb 6.2.1 and wait for new problem reports. After another look at the current changes, switch directly to the in-tree Merge from vendor branch NTPD: After another look at the current changes, switch directly to the in-tree Add SA_LEN patch for client.c, too Close .if. Install GNU tar as gtar now in preparation for adding bsdtar. Import libarchive and bsdtar. The default tar (/usr/bin/tar) can be choosen Merge from vendor branch BSDTAR: Merge from vendor branch LIBARCHIVE: Import libarchive and bsdtar. The default tar (/usr/bin/tar) can be choosen Import libarchive and bsdtar. The default tar (/usr/bin/tar) can be choosen Add libarchive to prebuilt libraries. - style(9) cleanup Add _ntp to the user database, correct the path to nologin for proxy Initialize pseudointr_ch in fdc. Change zconf.h and zlib.h to use underscored protection macros. Fix systat -netstat and remove the dependency on KVM for this part. Switch to OpenNTPD by default. For the moment, the documentation is Remove ntpdate from FILES list as well. Show the CPU used by the multi-threaded network stack to handle a socket. Fix the condition under which yacc -o is used. This should fix the parallel Remove left-over '{'. Add solib-legacy.c, it is needed for proper attach support. Merge from vendor branch GDB: Add solib-legacy.c, it is needed for proper attach support. If the first slot is empty, don't continue scanning. This fixes hangs Fix an endian bug in pflog. The DragonFly version of PF uses the normal -W is included in CFLAGS already. -Wall -Wstrict-prototypes already in CFLAGS. Remove CFLAGS?= 2 Remove the dependency on either asprintf or va_copy. This will be included Fix a lhs-cast by casting first to unsigned char and then to the Sync with FreeBSD. Most importantly, this removes the need for perl. Remove ptx, it isn't maintained and hasn't been used by base for years. The code and sources here haven't been used since FreeBSD 2.x, nuke Add preprocessor handling for newer processor, they should get the better Sync with FreeBSD. This removes the need for perl. Remove the old release infrastruture. Certain parts e.g. sysinstall or Cleanup send-pr and ptx on next update. Add some more hacks for dfport handling, to allow make index to mostly - use STAILQ, instead of hand grown implementation Fix a bug in parsefmt where a string still referenced would be freed. Override USE_GCC=3.4, we want to use our system compiler in that case. Don't install the undocumented scriptdump Perl script. Replace spkrtest script with a shell version. Always use the width field from the varent, not the original width. Use the YACC source for c-exp.y and f-exp.y, not the pre-built Normally we want to warn if the local IP address is used by a different Change the name entry in struct nlist to const char *. Make pstat WARNS=6 clean. Revert a part of the last commit. The changes to use sysctl interface Add a new system library, libkinfo. It is intended to replace libkvm for Include Makefile Add kinfo_get_vfs_bufspace(). Remove some junk from Makefile. Use kinfo_get_vfs_bufspace instead of kvm. Replace lvalue cast. Change the include order of archive_string.h and archive_prviate.h. Comment out extra tokens after #else and #endif. Add the basic of libkcore. Switch pstat to use kcore/kinfo backing, Fix parameters for comparision function. Fix warnings about yyerror, yylex and yy_unput Don't define the .BEGIN with the override message twice for modules Remove some !FreeBSD compat code, which produces warnings on DragonFly. Fix a stupid fault in the last condition. The code should be skipped, if Reorganise the nlist handling a bit, use an enum instead of preprocessor Missing return-value check for malloc. Cover the verbosity code in {}, otherwise the newline is printed everytime. Allow "make index" to actually by fixing lots of smaller glitches all over Convert some broken cases where Error is called, but we try to continue, Set DFPORTSDIR. It will be used by "make index" soon. main.c: 1.81->1.82 author: ru dir.c: 1.31->1.32 Fix a bug that leads to a crash when binat rules of the form job.h: 1.20->1.21 parse.c: 1.53->1.54 make.1: 1.67->1.68 Makefile:1.30->1.31 Don't include stddef.h from the kernel. Remove struct ipprotosw. It's identical to protosw, so use the generic Switch a strncpy into a strlcpy. I'm not sure why this triggers propolice, Makefile: 1.31->1.32 Makefile: 1.32->1.33 make.1: 1.77->1.78 job.c:1.52->1.53 *** empty log message *** Merge from vendor branch NTPD: *** empty log message *** Force commited to annotate missing log message. Merge from vendor branch NTPD: Force commited to annotate missing log message. Change the default for ntpd back to -s, the bug which triggered this Move ntp.org's ntp into the attic. Add rdate(8). This supports both SNTP(RFC 2030) and RFC 868. var.c: 1.42->1.43 var.c: 1.44->1.45 cond.c:1.27->1.28 compat.c:1.38->1.39 job.c: 1.51->1.52 Don't read userland pointers directly, copy them first into kernel land Remove unused tcpdump sources. Make rp(4) compilable again. Don't even think about unloading this stuff. Replace div_pcblist / rip_pcblist / udp_pcblist with in_pcblist_global. Return retval if the second sysctl failed, not NULL. Don't include the "Hello, world" example in libz. Mark the following stuff as depricated: Implement generation counters as (at least) 64 bit counters. The increment Add dependency for libcrypto to dc(1). Don't build bc and dc if Merge from vendor branch NTPD: Sync with OpenBSD. Add an example ntpd.conf. This is not installed by default, because Remove generic generating counting. I'm going to use a different approach Add weak fake pthread functions, which always fail. This is needed to Remove the userland visible part of the socket generation counting. Consistently use /:/boot:/modules as search path in the loader, the kernel USER_LDT has been removed ages ago. Remove unecessary range check. start and num are unsigned anyway and Define __arysize for FreeBSD 4 buildworlds. Remove cruft for GCC 3.3 - Update GCC to version 3.4.3. Merge from vendor branch GCC: - Update GCC to version 3.4.3. Don't even include the GDB build framework. Install the GCC 2.95 main info pages as gcc2.info and cpp2.info. Fix a regression with GCC 3.4.3 by using __packed instead of mode(byte). Change type of len to size_t. Use size_t for len. Fix NO_OBJC knob. Depend on _KERNEL or _KERNEL_STRUCTURES. Remove support for ancient (FreeBSD) kernel, which don't set profrate. Trust your intuition. If something feels wrong, it often is wrong. Add digi driver to simplify testing. This should replace dgb. Add new structures for exporting the cputime statistics via 64 bit counters Remove my local patch again, it was still not meant to be commited. Check that the ifnet_addrs entry is actually used before dereferencing it. Ignore ENOENT when fetching the interface MIB. This can happen for Don't build the gdb-related documents in binutils-2.15. Mark old file as dead. Replace sockstat(1) Perl script with the C version from FreeBSD 5. Switch from binutils 2.14 to binutils 2.15. Update isc-dhcp to 3.0.2rc3 using patch infrastructure. Update isc-dhcp to 3.0.2rc3 using patch infrastructure. Merge from vendor branch DHCP: RIP Don't install old GDB documentation. Add ISO C99's _Exit, which is identical to _exit. Merge FreeBSD rev 1.70: WARNS=6 safeness: Mark binutils-2.14 as dead. Move sa_X macros under _KERNEL protection for now. Unhook Perl from build. Add _DIAGASSERT macro for library internal usage. This is not active by style(9) cleanup. Don't clobber match_str in buildmatch by strdup'ing the string internally. ANSIfy. Fix various warnings. Statify. It's WARNS?=6, not WARNS=6. Don't create binutils 2.14 and perl directories. Remove perl and RIP Perl. Move short option parsing into its own function. Constify the local char * Move the handling of '--' as argument into getopt_internal. Add a parameter Add support for getopt_long_only. It allows using long options with a single Fix two small bugs in getopt_long_only handling: Better diagnostic for getopt_long_only. Remove wx(4). It's been superseded by em(4). Remove wx(4) man page as well. Remove documentation of NOPERL, it's the default now. Don't create an Remove perl's man path. Add local_syms script. Nice for cleaning the kernel namespace. Add splitpatch. Mark OpenSSL 0.9.7d as dead. Make newkey WARNS=6 clean. WARNS=6 cleaness. Switch chkey/newkey to use libcrypto instead of libmp for the internal Add forgotten cast back. This code depends on BASE being short and Remove some debugging printfs. Fix an error message. Remove another Perl left over. Convert to keyserv, telnetd and telnet to libcrypto's BIGNUM Deorbit libgmp. Remove LIBGMP and LIBMP. Add gmp handling to upgrade_etc. typo Add default if_re_load line. Don't write the name of the sysctl to a temporary variable and truncate Correctly reset place in getopt_long_only the next time it is used. Now that we have dhcpd and dhcrelay in base, fix the inherited rcNG scripts RIP acpica-unix-20031203. Explicitly initialize fp to NULL. If sendfile is called on a non-socket, Don't include the kvm backend, it's not really working anyway. Add "proc" command. Initial jail support for varsyms. Initial part of DEVICE_POLLING support for wi(4). Still some rough edges, Replace temporary allocation from alloca with malloc/free. strl* conversion. Instead of messing around with sprintf, use asprintf to do the allocation Back out part of rev 1.24. The intention the quoting backfires and bad Don't print the override warning for package-name. Support PORTDIRNAME and PKGORIGIN. WARNS=6. The __DECONST is save, because execv doesn't mess with the strings, Avoid possible copyright problems and add the copyright of the Use PCIB_ROUTE_INTERRUPT instead of PCI_ROUTE_INTERRUPT, the latter is Ignore zero length mbuf in bus_dmamap_load_mbuf. Remove left-over. From FreeBSD: Add a simple tool to generate a sequence of numbers, without all the Add missing 'by' in the license text of the man page. Adjust calling convention for idp_input, idp_output and spp_input to match Add forward declaration of struct vnode to fix compiler warning. Use __DEQUALIFY, not only __DECONST to get rid of the volatile too. Consolidate ifqueue macros for the upcoming ALTQ work. Remove the support for lib/csu/${MACHINE_ARCH}-elf. ether_input already handles BPF and dropping of Use BPF_TAP and BPF_MTAP instead of the expanded versions where possible. Mostly switch to the patch framework. For cvs.1.patch, it's easier to do by Fix a type in rev. 1.16 Remove !_KERNEL parts. Remove the _KERNEL parts. - Add support for attaching alternative DLTs to an interface. Add some spaces for typographical correctness. Fix the quoting of MAKEFLAGS as noted in rev. 1.47. Don't build a new mbuf and bpf_mtap the old one. Use and define size_t as specified by SUSv3. Add a default for Perl 5.8.5 to override the default for Use M_INTWAIT, not M_NOWAIT. We don't really support fast interrupt ANSIfication and minor style cleanups. remove bad semicolon We don't currently build ld-elf as binary for i386 and even if Don't build sysctl as bootstrap tool. This helps us avoiding FreeBSD 4 Remove unused compat code like FXP_LOCK, a fake struct mtx. In preparation for the jail commit, include sys/param.h instead of sys/types.h. sys/types.h ==> sys/param.h Split vn_fullpath into cache_fullpath and vn_fullpath. The former just Uncomment the entry for kern_chrot in kern_syscall.h and change the Add jail_attach syscall. Regen. Don't copy blindly MAXPATHLEN byte from fullpath. vn_fullpath Cleanup last commit. Remove local ncp, it shadows the parameter, add Regen. sys/types.h ==> sys/param.h sys/types.h ==> sys/param.h Fix warning by adding explicit () around addition. Use PRIx32 to fix compiler warning correctly. Buglet in last commit, the first argument of bpf_ptap is the actual bpf_if. Remove unused local variable. Define in_inithead in net/route.h, it's purpose is the initialisation Add missing "return(error)". Include jail.h after proc.h to get struct ucred. Add jail_attach support. Remove GPLed fpemulation, old rp, old awe and pcic. Remove unused variable. Remove compat code for anything, but DragonFly. Add prototype for ciss_print0 in place. Make const correct. Hide pure kernel threads from jailed processes. Remove SCARG junk. Always honor fdp->fd_nrdir as root. Once the loop reached NCF_ROOT, *_load_pallete's second argument should be const. Don't double assign -W -Wall and some more warning flags. GCC 3.4 doesn't include a #pragma weak reference, if the symbol Change the ALTQ macro stubs to actually work. For the current ALTQ less Change default perl version to 5.8.6 Fix CPP buglet. Don't cast int ==> unsigned char ==> char, int ==> char is enough. Don't use the statfs field f_mntonname in filesystems. For the userland Free temporary buffer in the buffer overflow case too. GCC 1.x is dead. Read eaddr in two parts (32 bit read and 16 bit read). This fixes a No namespace pollution in sys/cdefs.h. Rename VM_CACHELINE_SIZE Fully separate the kcore initialisation and the kinfo wrapper. Don't define _Bool for GCC 3.0 and later. It was added was builtin Remove debug flag which slipped into last commit. Use M_ZERO instead of manual bzero. exit() needs stdlib.h strcmp() needs string.h Forced commit to note that I'm removing the default VINUMDEBUG for now. Remove the default VINUMDEBUG option for now. I'll make most of the Provide a mechanism for dumping relocation information. No need to zero fill memory, mmapped anonymously. Kernel will Remove -DFREEBSD_ELF. give out a little more information in case of a missing dependency If we change obj_rtld.path after initialising __progname, make sure we Do not depend on existence of _end symbol in obj_from_addr, use Stop caring about GCC versions between 2.5 and 2.7, they are Readd the copystr for f_mntfromname of root. It wasn't meant to be Do initialise fp to NULL explicitly, the last comment wasn't enough. - Add support for DT_FLAGS. Add a macro SYSCTL_SET_CHILDREN. Use it to avoid lhs cast. During init time, we can savely allocate the mbuf cluster with Remove the conditionalized FreeBSD 5 code. Keep the capability assignment, Add missing */. Don't cast lvavalues. - convert to bus_space macros Include pci_private.h to get pci_class_to_string. Set so->so_pcb directly to NULL to avoid lvalue cast. Add -DIN_TARGET_LIB to prevent libobjc from depending on in-tree GCC code. Directly use ifp->if_snd, it's type will change soon. Don't assign &ifp->if_snd to a temporary variable, it's type will change Use ifp->if_snd directly. Use IFQ_SET_MAXLEN. Remove stale local variable ifq. Add the 'All rights reserved.'. It's not entirely clear if this is still Remove unused variable. Import ALTQ support from KAME. This is based on the FreeBSD 4 snapshot. Add ALTQ support to pfctl(8). Fix a small bug introduced earlier altq include dir install altq headers Remove unnecessary packed attribute. Remove unneeded packed attributes. Remove extra token after #endif introduced in last commit. Separate error handling path from normal return to avoid GC strsize. GC local variable size. Clean-up. Mark dead. Not supported and if we ever want to support it again, this needs Move mac = NULL initialisation up and simplify second conditional. Inline some users of SC_STAT to avoid lvalue cast. link in altq support. ALTQ support. ALTQ support. ALTQ support. Mark it as ALTQ ready too. Increase size of limit column by one character to better fit Add support for ICH6 and some nForce AC97 chips. Rename IFM_homePNA to IFM_HPNA_1 and IFM_1000_TX to IFM_1000_T. ALTQ support. Set ALTQ ready. ALTQ support. ALTQ support. Remove bogus check of ifq max len. ALTQ support. Introduce vnodepv_entry_t as type for the vnodeopv_entry functions. Use ifq_is_empty to allow USB drivers to support ALTQ. Be more careful when doing el_parse() - only do it when el is ALTQ support. Don't cast lvalues. Avoid casts as lvalues. Replace lvalue cast with explicit cast from u_short via int to void *. Replace lvalue cast. Make the second argument to ncp_conn_asssert a const char *. ALTQ support. Change second argument of dc_crc_le to c_caddr_t to fix warning. Fix lvalue cast. Replace list of checks with loop. Fix lvalue casts. Fix lvalue cast. ALTQ support. ALTQ support. Forgotten in last commit. ALTQ support. ALTQ support. ALTQ support. Add __pure as attribute. A __pure function can only depend on the The old USB ethernet code utilized a netisr to hand packets over Rename PACKAGES to REL_PACKAGES. PACKAGES is used by ports already. Use -pthread instead of adding -lc_r for linking the thread library. GCC supports two pseudo variables to get the function name, __FUNCTION__ ALTQ support. ALTQ support. ALTQ support. ALTQ support. ALTQ support. Use ifq_set_maxlen and don't change the field directly. ALTQ support. ALTQ support. ALTQ support. Don't inline lnc_rint and lnc_tint, it's useless and GCC rightfully complains ALTQ support. Call the "cluster_save buffer" type just "cluster_save", it doesn't fit ALTQ support. ALTQ support. ALTQ support. Use ifq_set_maxlen instead of messing with the fields directly. Depend on ntpd. ntpdate is gone and the behaviour is the default now. ALTQ support. ALTQ support. ALTQ support. ALTQ support. Add m_defrag_nofree, which works like m_defrag, but doesn't free the Fix a bug introduced earlier. We can't put packets back into Prepare for ALTQ. ALTQ support. ALTQ support. Fixes a small bug which could result in packets showing ALTQ support. ALTQ support. ALTQ support. ALTQ support. Move the m_freem from vr_encap to vr_start, making the passed mbuf effectively ALTQ support. ALTQ support. ALTQ support Replace dc_coal with m_defrag. ALTQ support. ALTQ support. ALTQ support. The error handling in nv_ifstart should be reviewed, Remove tulip_ifstart_one, always process the whole queue. ALTQ support. Remove altqd's init script, we don't have it anyway. Don't depend on altqd. We don't need syslogd support for chrooted ntpd. Require NETWORKING, not DAEMON for ntpd. The former makes much more sense Rename local variables and arguments "index" to "idx". Some cleanup. Use ether_crc32_le. Expand some macros. Use normal inb / outb / inw / outw / inl / outl functons. Remove LE_NOLEMAC and LE_NOLANCE conditionals. Remove TULIP_USE_SOFTINTR support. If someone really needs it, we can TLUIP_HDR_DATA was always defined, unconditionalize. GC __alpha__ support. GC TULIP_PERFSTAT support. GC TULIP_BUS_DMA. GC !__DragonFly__ section. GC TULIP_VERBOSE. GC TULIP_DEBUG. GC TULIP_NEED_FASTTIMEOUT GC TULIP_CRC32_POLY, TULIP_ADDREQUAL and TULIP_ADDRBRDCST. Add sigtimedwait.o and sigwaitinfo.o. Remove some duplicate FreeBSD CVS IDs, move some IDs to better places. More cleanup. Define default value for PRId64 to keep FreeBSD 4 happy. Conditionalize the source on INET and INET6 respectively. Temporary switch the cardbus interrupt from INTR_TYPE_AV to Split search for already loaded object into a helper function. Sync changes from OpenBSD. Most importantly, this adds reverse proxy support. Move the open, stat and recheck for already loaded objects into Also bumb the reference counters when a object is already loaded. Instead of messing with the internal name-to-oid conversion, Make init(8) WARNS=6 clean. Merge from vendor branch NTPD: Sync OpenNTPD with OpenBSD. Sync OpenNTPD with OpenBSD. Don't define unit field on DragonFly. We already have the unit number in fl_ifname. lvalue cast. Remove NO_WERROR assignment. Remove -Werror from CFLAGS and fix the missing part for WARNS=6. Use __DECONST for the linted interface violations. Remove pre-ANSI malloc prototype. Remove NO_WERROR. lvalue casts. Add some consts. lvalue casts. Mark symname const Update Atheros entries. Regen. Remove extra tokens after #else and #endif. Use u_char define from sys/types.h. Rearm receiver, it was lost in the conversion. Define __DECONST here for the sake of FreeBSD. Simplify patches by defining SA_LEN in one place. Rest of last commit. Define SA_LEN in one place. Reduce foot-shooting potential by adding -dP for cvs update and -P for WARNS=6 cleaness, some constification, some cleanups. RIP Alpha libc. Merge from vendor branch GCC: Add C++ ctype based on NetBSD-style ctype.h. Use NXCC to compile a runnable executable. The libc we are building might Use the standard isxdigit instead of the non-standard ishexnumber. Replace digittoint(c) with (c - '0'), we already know that c is a digit, Use isdigit instead of isnumber. Don't define isatty, it's already defined in unistd.h. WARNS=6 cleanup. Fix warning about pre-ANSI function types. Fix prototypes. This functions actually need a parameter. Use ULLONG_MAX, off_t is unsigned. Correct a mixup between NLSGRP and NLSOWN. Add LOCALE{DIR,GRP,OWN,MODE}. Remove bad line breaks. Don't use stdint.h on FreeBSD. Really use ALTQ. Don't try to prepend to an ALTQ queue. ALTQ support. Initialize all fields in kobj_class. Change Makefile to follow common white space rules. Hide -DALTQ and Sync with FreeBSD. Use getprogname() instead of depending on __progname where possible. Add bus_dmamap_syncs before bus_dmamap_unloads. typo Add generic build framework for message catalogs. Use bsd.nls.mk. Use NLS framework. Add data files for Character Set mappings and Encoding Scheme alias Add locale and character set descriptions. This differs from the forgotten file iconv frontend. Helper programs. Add i18n directories. I18N module build framework. Add UTF7 build wrapper. Add citrus backend code and iconv front end. This is intentionally Add wchar and multibyte related man pages. Remove MT_FTABLE, it is not used anymore. Remove the #if 0'd entries Add ALTQ-style enqueue / dequeue / request functions for traditional Always build altq_etherclassify. Change PACKET_TAG_* ids to be consecutive. Fix build. Move common CFLAGS additions up. Don't build / install the various historic troff papers to Don't create some directories which will now be empty. Don't build groff as buildtool, it's not needed anymore. Move ctype man page from locale to gen, they are locale-sensitive, but Use getopt_long's getopt implementation. Use libc's getopt_long, I've been building and using ports for a while with Another man page which was moved to lib/libc/gen. SUS says that getopt() should return -1 immediately if the argument is "-". Correctly return -1 for "-" as argument as required by SUS. Add '+' to argument list of find to prevent getopt from reordering Back out switch to getopt_long implementation of getopt, it breaks Merge r1.2 from NetBSD: Move SHLIB_MAJOR / SHLIB_MINOR assignment up into Makefile.inc. Use NXCC to build make_hash and make_keys. Instead of using the non-standard conforming %+ format string, GC Stop building old GDB. Pass const ** as second argument to the command functions, Raise WARNS to 6: WARNS=6. Fix warnings. idx is always used, shut GCC up by initialising it to 0. Fix some warnings. s/index/idx. Add an explicit default case to teach GCC that idx is Make dktypnames const. More stuff for Citrus. More Citrus code. Add Citrus files not conflicting with the current rune implementation. Add some more path definitions as used by the Citrus framework. Remove redundant declarations. Convert to ANSI C function declarations. Use a common source for the string to integer conversion. Add Fix warnings. Raise WARNS to 6. WARNS=6. WARNS=6. Generic firmware support. Currently implemented is loading from Add /etc/firmware, remove /etc/gnats. Minor style changes. Add kqueue overwrite for libc_r. We have to trace the opened descriptor. Override _kevent, not kevent. This should fix the DNS issue, since Change prototype of sys_set_tls_area and sys_get_tls_area to take regen Adjust userland prototypes as well. int size --> size_t size Fix reboot -k, it didn't truncate /boot/nextboot.conf. No more Alpha support for RTLD. Defer work from the signal handlers into the main loop. Check for the signal handlers another time, they might not disrupt Include sys/types.h to make it self-contained. Fix handling of deallocation of dynamic TLS, the previous code could Don't activate -fstrict-aliasing by default, not even with -O2+. Use the correct type in va_arg call, char is promoted to int before calling License typo. Remove redundant panic. Remove the pre-NEWPCM sound drivers and the speaker-based emulations. Separate M_NULLOK from M_RNOWAIT. Cleanup the TLS implementation: New strcspn implementation, which is O(strln(str) + strlen(chars)) GNU getopt resets itself partially when the application sets optind to 0. Fix warnings, use ISO prototype. Remove tcb_size and flag argument for _rtld_allocate_tls, Remove extern for functions, line up function names, remove option names. Avoid discarding constness. const correctness Declare prototypes for all functions. Fix non-BPF declaration of bpf_ptap, it was out-of-sync with the header. Make it O(strlen(s) + strlen(charset)) like strcspn. Back out part of last commit, optind has to be initialised to 1. Move the processing of malloc flags out of the loop. The exception Move the processing of flags out of the loop. The exception is M_WAITOK, Remove reminders of VoxWare. Remove VoxWare related entries, don't list drivers support by NEWPCM. Split pcm into the generic framework (pcm) and the sound cards (snd). WARNS=6. WARNS=6. WARNS=6. const changes. Remove pre-FreeBSD 3 compat conditionals. WARNS=6 Add a macro to print the list of current processes independ of wait state. When ALTQ was detached from an interface queue, the function pointers Replace TIMER_USE_1 kernel option with hw.i8254.walltimer tunable. Sync nv(4) with nvnet-src-20050312. Since optind=0 has a special meaning for GNU getopt compatibility, Revamp getopt(3) usage: Add nv(4) man page. Add closefrom(2) syscall. It closes all file descriptors equal or greater Regen. Add userland prototype for closefrom. Use kernel's closefrom. Tell the world that we have closefrom. Don't just undefine USE_RC_SUBR, because it does define the list Simply the loop based on the knowledge that fd <= fdp->fd_lastfile First bunch of dump(8) cleanups. Remove unused functions. Compute buffer length only once per rmtcall. Split atomic into two functions, atomic_read and atomic_write. Warning fixes. Move initialisation of wrote into the loop to prevent longjmp/setjmp Remove argument names from prototypes. Adjust prototypes to deal Don't write to tape, strdup it first. Remove a cast, correct another. Decommon a lot of variables, makes some static, fix a small bug in rmtcall Remove another argument name in a prototype. Simplified NTP kernel interface: wether ==> whether Use corrected system time internally, adjust old offsets after Use corrected system time internally, adjust old offsets after Merge from vendor branch NTPD: Sync with OpenBSD. This fixes the bug of -s not working and allows leap second, not leaf second. fix totally stupid, but consistent spelling Ignore replies with a negative delay. Correct time of next query and Fix a small stack disclosure in ifconf(). Extend a local buffer to workarond a buffer overflow for now. Fix the parameter order of __stack_smash_handler. Also print the GC OLDCARD userland tools. Remove pccard RCNG script, the daemon config, the depency on pccard Use the headers from src/include for ctype.h and runetype.h instead Add snprintf and vsnprintf. Make kern.ntp.permanent specify the frequency correction per second, Switch IP divert from mbuf based tagging to mbuf tags. Use the local runetype.h here too. Sync with OpenBSD. The ntpd -s is included now. Merge from vendor branch NTPD: Sync with OpenBSD. The ntpd -s is included now. WARNS=6 + minor style issues while here. Sync in my paranoia check. Merge from vendor branch NTPD: Print "reply from ..." before "adjust local clock ..." in debug mode. Merge from vendor branch NTPD: Provide correct PLIST_SUB value for batch scripts. Correct a bug in the positive 32bit overflow handling of ntp_tick_acc. Add support for ServerWorks chipsets. Use get_approximate_time_t(). basetime should be static, all access done via PKI. Remove some commented out warnings. Remove old beforeinstall hack. Should be not used for ages. Un-disable stack protector. I take responsibility for all problems this WARNS=6. The (int) case should be reevaluated when this code is converted RIP compat libraries. Use misc/compatXX from ports instead. Make GCC 3.4 the default compiler. Add _DATE_FMT to get the locale-specific date(1) string. Use new _DATE_FMT langinfo instead of '%+'. Unhook PCVT from kernel build. Stop building libkeycap, PCVT is gone. Unhook GCC 2.95 and Binutils 2.12 from build. date as bootstrap tool doesn't make sense, remove it. Make osreldate.h building happing by explicitly using /bin/date. Override gcc2 for buildworld. Necessary to handle the former default As long as a peer has some trust left, use the aggressive retry interval Consider only drifts smaller than 32 ms as negligible. Bump major numbers in preparation for libc work. Don't create directories for gcc2 and binutils212 anymore. Build aicasm as host program, not via world's compiler. Complete Citrus import. Import message catalog implement from Use system version of getopt_long and basename for the bootstrapping tools. Unhook gperf, it was only used by gcc2. RIP PCVT userland. Remove RC scripts for stuff we have don't have in our tree. Add frequency correction support. The drift factor is currently Don't install wscons. Remove obsolete rcNG scripts as part of make upgrade. Use normal variable names for rtsold, provide default values. Move to CPU #0 in settime() to prevent races. Back out last commit, we are not there yet and I have to study Don't call cpu_mb1 after lwkt_setcpu_self, but call it internally Remove MIPS bits. GC more mips parts. Make kern.ntp.delta preemption save. Add forgotten SYSCTL_ADD_QUAD, move the SYSCTL_ADD_INT up where it belongs. Add support for unsigned quads. Use strtoq and %qd / %qu. Always use SYSCTL_OUT, sysctl doesn't print the value otherwise. Fix stupid order bug. The code should ignore the first sample(s), Properly create and destroy the DMA maps. ANSIfy and fix function casts. ANSIfy. CISS quirk. Ansify getcwd and declare the syscall prototype. First stab at WARNS=6 cleaning. More will be done once I figure out Fix warnings, ANSIfy, constify. const correctness Readd ypresp_allfn, now correctly typed. Fix warnings. Unconditionalize HAS_UTRACE, we never have NetBSD syscalls. Remove compat junk, __getcwd always exists on DragonFly. Remove more __NETBSD_SYSCALLS. Fix warnings, remove unused headers. Remove additional { }. Fix warnings. Sprinkle const. Use size_t in some places. ANSIfy. Fix warnings. Fix warnings, ANSIfy. Fix warning. vfork can clobber the local stack frame, use fork(). We might also Fix warnings. use uid_t / gid_t for prototype in stdlib.h, ANSIfy. Use namespace mangler. ANSIfy. Fix warnings. Ensure that the directory fits into memory. Fix warnings. Include sys/types.h to get uid_t and gid_t. Include guard. Fix warnings. Use sysctlbyname. Really include patches, don't let them catch dust. Fix warnings. Fix warnings. Change arc4random_addrandom to pass the more natural uint8_t * and Add prototype for __creat. Fix warnings. Use sysctlbyname. Correct and improve __diagassert. Work around restrict. Explicitly initialize e. The code flow looks safe, but having Fix warning. Always use strlcpy, in the last case also check the return value to Remove useless void * before free. Remove local prototype of It's dead, Jim. ANSIfy, no (void) before functions, include stdlib.h for prototype. Correct types for devname[_r]. Remove dead code. Fix warnings. Remove unionfs hack. DTF_NODUP is now a NOP. const / sign correctnes const correctness Mark name as const char *, not char *. ANSIfy, fix warnings. Always use strlcpy, in the last case of possible truncation also check Remove dllockinit, it's been deprecated and is a NOP anyway. Remove dllockinit. Fix warnings. Use size_t for length, not int. ANSIfy, use strlcpy from strncpy, fix most warnings. Fix warnings. Use size_t for the number of elements, use sysctlbyname instead of Fix warnings, use strlcpy instead of strcpy + manual check. When bitwise iterating over in_addr_t, use it for the loop variable too. Include string.h, use strlcpy. Use const for internal casts to not conflict with const pointers. Remove unused local functions. Const correctness. Sign correctness. DragonFly has decided to depend on char being signed, use it. Declare __system first. ANSIfy. Use __DECONST for interface const violation. Fix the sign issue by reordering the operations. Use memcpy instead Rename symlink to my_symlink to avoid global shadowing. Fix warnings. Declare environ on file scope. Don't declare __findenv as inline, Always setup the initial TCB correctly. This saves us from having to Add support for TLS. For the initial thread, rtld has already created the TCB and TLS storage. Slight correction for the last commit, thread TCB == NULL as error Forced commit to correct imporant spelling error: Always allocate static TLS space. Readd explicit u_char casts for tolower(). Readd lost line. PANIC for now, if the linker can't allocate TLS space. Including errno.h and still declaring errno is BROKEN. extern int errno considered harmful. de-errno de-errno de-errno de-errno de-errno de-errno de-errno Only install man page for libc. Separate INTERNALLIB and INSTALL_PIC_ARCHIVE. We want to have the latter Make a special libc version for RTLD which doesn't use TLS as it will Prepare for thread-local errno by implementing full TLS support for sys_set_tls_area is called from _init_tls path, it should not touch Add NO_PKGTOOLS to not install the FreeBSD derived version of the tools. Include __error in libc_rtld too, otherwise RTLD is left with an undefined Link with --no-undefined to enfore the normal missing symbol check, Remove sys_set_tls_area hack, doesn't matter when we fault in case isprint() should be true only for characters in space, not blank. Override closefrom() in libc_r to prevent it from closing the Explicitly close low descriptors to keep the internal state Forgotten major bump. We have to allocate the TLS area for _thread_kern_thread too. Remove PCVT related entries. Make errno a thread-local variable and remove the __error function. Spring-cleaning. More spring-cleaning. More spring-cleaning. Refine USE_RC_SUBR / USE_RCORDER handling. The install-rc-script target gnu/usr.sbin and gnu/libexec have been empty for ages, so save us Move old locale sources into the attic. Sync with FreeBSD. This adds read-only support for zip and ISO9660. Sync with FreeBSD. This adds read-only support for zip and ISO9660. Merge from vendor branch BSDTAR: Merge from vendor branch LIBARCHIVE: Sync with FreeBSD. This adds read-only support for zip and ISO9660. Don't bother dealing with hidden syscalls, just do it for all. Remove obsolete patches. Allocate some additional space for dlopen'd libraries. Currently 256 byte, Split libc and libc_r. -pthread now links aginst both libc and libc_r. Typo, linking against -lc_p belongs into the -pg case. Left-over from the old mbuf chain tagging. ALTQ Add handling of R_386_TLS_TPOFF32. Bite the bullet and add real masks isprint and isgraph. Use this as Nuke ctypeio.c and associated ctype to rune table conversion, Catch up with ctype.h. Replace ifq_handoff like code with a call to ifq_handoff. atm_output did almost the same as ifq_handoff, it just skipped Use ifq_handoff instead of handrolling it. Use ifq_handoff instead of the hand-rolled version with inlined 0x7f is only a control character. Typo in '(' description. Replace __offsetof with similiar pointer expression and use "m" Remove sc_unit and use if_printf / device_printf instead. Remove obsolete comment. Merge from vendor branch GCC: Remove two unnecessary entries. typo Simplify code by using IF_DRAIN. Replace local tulip_mbuf_compress with m_defrag call. Remove unused second argument to tulip_intr_handler. Use device_printf / if_printf and stop abusing the ifnet fields for Don't activate -funit-at-a-time with -O2 and higher by default, when WARNS=6 clean already. Fix pointer arithemetic. Add an explicit abort in printaddr to avoid unused variable warnings. Move if_initname call before xl_reset to ensure correct initialisation. Reorder initialisation by calling if_initname before vr_reset. Remove an incorrect free. The code path should normally not be hit, but Rework TX EOF handling. We have to always check for TX underruns, if_printf/device_printf cleanup. Remove minor junk. Use bus_alloc_resource_any. Use pci helper functions, don't roll them Exploit bus_alloc_resource_any. style(9) Use ether_crc32_le instead of local hack. Convert bge(4) to the new M_EXT API. This allows merging the dynamic Spurious semicolon broke gcc2 build. style(9) and nic style changes. Cleanup bfe_probe. device_printf / if_printf cleanup. duplicate is Fake DES was removed a while ago, remove the prototypes as well. Remove spurious semicolon. Merge from vendor branch GCC: Update in-tree GCC to 3.4.4. Update in-tree GCC to 3.4.4. Merge from vendor branch GCC: Update for GCC 3.4.4. Note that the prefered way to fetch just Revert to old local unit allocation until I find a better solution Use ether_crc32_be. if_printf / device_printf. No FreeBSD 5/6 support here. No machine/clock.h needed. Use ether_crc32_le. Add a device ID for MPI350. if_printf / device_printf. Use bus_alloc_resource_any when possible. Convert to new m_ext API. style Use ether_crc32_be. if_printf / device_printf and some further cleanup. Nuke further compatibility junk. if_printf / device_printf and some more cleanup. Use ether_crc32_le. Remove compat junk. Force jumbo buffers to be a multiple of 64bit. Convert to new m_ext API. Remove some compat junk, deindent a switch. Add missing splx(s) in sk_detach. style. remove some unused variables. Use ether_ioctl for the default case in dc_ioctl. Merge SIOCSIFADDR, Fall through to ether_ioctl and merge those cases which just called it. Use ether_ioctl for the default case. Merge cases which just called it. Don't call ether_ioctl first, check for errors and call it again Don't cast command to int. Use ether_ioctl for the default case and merge the cases which just Minor reorder of the code to make it easier to deal with capabilities. Remove nge_jpool_etrny, not used anymore. Convert to new m_ext interface. This also fixes a memory leak, since Use ether_crc32_le and ether_crc32_be. style and minor cleanup Use ether_crc32_be. Forgot to assign ext_buf, resulting in a panic. libc_r has to provide strong versions of the public symbols to override if_printf / device_printf. Use ether_crc32_be / ether_crc32_le. Use PCI accessor functions instead of messing directly with the config if_printf / device_printf cleanup. if_printf / device_printf. Remove bogus check if interface was already attached. The function is Use PCI accessor functions instead of messing directly with the Remove __inline hints, let the compiler figure out the details. Let fxp_release take a device_t directly and change some device_printf No need to bzero softc. cleanup Convert to new m_ext API. If we want to abuse an API by providing a callback doing nothing, we Remove M_EXT_OLD, rename m_ext.ext_nref.new to m_ext.ext_ref and Don't match executables with ELFOSABI_NONE against the brand list. Update to file 4.13. Put the contrib files into contrib/file-4 instead Merge from vendor branch FILE: Update to file 4.13. Put the contrib files into contrib/file-4 instead Increment subversion to allow matching for ELF ABI tagging enabled kernels. Teach file about DragonFly's ELF ABI tagging. Add back support for SYSV binaries. Those are nasty since Stop branding DragonFly binaries with the FreeBSD ABI. Removing rest of debugging code which slipped into the commit. if_printf / device_printf. remove stored device_t reference as it is Back out last commit, this wasn't supposed to crep in. Don't name arguments in prototypes. Nuke __STDC__ conditional. Don't commit half of a change. Prefix parameter names and local arguments Use IF_DRAIN. reorder declarations Add missing parameter. Include sys/thread2.h to unbreak build. Rename label to not collidate with local variable. Conditionalize error, Expand itjc_bus_setup, it declares variables. Add a new macro IF_LLSOCKADDR which maps a ifnet pointer to the Another instance of IF_LLSOCKADDR. Instead of checking for ifnet_addrs[ifp->index - 1] == NULL to detect Instead of using ifnet_addrs and following ifa_ifp, use ifindex2ifnet Add f_owner (user who mounted the filesystem), f_type (filesystem type ID), Merge the pointer to the link-layer address into ifnet and remove Make call to arc_ioctl the default case. Remove splimp in xl_attach, the interrupt is created last and Replace splimp with critical sections for now. Fix a bug in xl_init, Always hook the interrupt up last, we don't have to worry about Use WAITOK allocation, fix some arguments and remove a now unused Let the system deal with device shutdown, don't do it yourself. Convert splimp to criticial sections for now. Cleanup the critical Fix detach order: We have to unhook the interrupt first and leave the Reorder initialisation to make protection unnecessary. Fix some bugs in the last commit. We have to call ether_ifdetach if we device_printf / if_printf and some minor nits. Use PCI helper functions instead of hand-rolling them. Remove now Use ether_crc32_be. Don't bzero the softc, it is already zero. Use M_WAITOK for contigmalloc now that the attach path is interrupt-save. Explicitly note that updating from pre-1.2 to PREVIEW or HEAD is not Convert to critical sections, move timer reset into the protection. Convert to critical sections. No need to protect the interupt from racing Add /usr/pkg/etc/rc.d to the rcNG search list. Reorder critical sections to be as short as possible by moving invariants out. if_printf / device_printf Use the PCI helper functions and nuke now unused macros. Move the callout_reset into the critical section. Use epic_detach for error cleanup during attach. Make attach interrupt Remove a useless assignment. Move callout_reset into critical section. We know that tl_probe is run first, so turn the check for a bad device Don't bzero softc. Setup interrupt last to get tl_attach ISR race free. Really use M_WAITOK. Typo. Convert from spl* to critical sections. Remove compat junk. Convert splimp to critical sections. Stop abusing splbio simply because others do the same. Use critical Switch to critical sections, fix some possible minor nits with ISR - convert to critical sections - setup interrupt last Only delete the miibus if it was attached first. Convert to critical sections. Convert to critical sections. Convert to critical sections. Convert from splhigh to critical sections. Convert to critical sections. Fix an early return without restoring Convert to critical sections. - convert to critical sections Convert to critical sections. Remove compat junk. Hook up interrupt Missed one splnet. Convert to critical sections. - convert to critical sections No interrupts, no M_NOWAIT. Use M_WAITOK instead. Remove compatiblity code. Convert to critical sections. Convert to critical sections. Convert to critical sections. Convert to critical sections. Add a missing ether_ifdetach when Convert to critical sections. Use bge_detach as common error path Convert to critical sections. Fix build. Convert to critical sections. Convert to critical sections. - convert to critical sections Use critical sections. Convert to critical sections. Convert to critical sections. - convert to critical sections Convert to critical sections. Remove old attempt at locking, it was incomplete and partially incorrect. Convert to critical sections. Convert to critical sections. Nuke code dealing with empty address list or trying to set the link-layer Enable port and memory-mapped IO in the PCI layer when the associated device_printf / if_printf. Use PCI accessor functions. Use ether_crc32_be. Nuke compat code. device_printf / if_printf Use PCI accessor functions. Use local storage instead of ac_enaddr in tl_attach. Use local storage instead of ac_enaddr. static != extern device_printf / if_printf Use PCI accessor functions. Fix an uninitialised variable I introduced earlier. Hook up interrupt last. Use ti_detach as common error path. No interrupts, no M_NOWAIT. Use local storage instead of ac_enaddr in ti_attach. Resource allocate now turns on port / memory bit in the PCI command reg, Convert to critical sections. Convert to critical sections. - convert to critical sections Use PCI accessor functions and nuke the port / memory enabling. Convert to critical sections. Convert to critical sections. Convert to critical sections. Convert to critical sections. Add missing breaks. - convert to critical sections - convert to critical sections Convert to critical sections. Convert to critical sections. Be a bit more conservative for now. simplify. Use if_printf for TUNDEBUG. Convert to critical sections. Use if_printf. Convert to critical sections. Forgotten from the ALTQ conversion. Convert to critical sections. Convert to critical sections. Stop updating interrupt masks. Convert to critical sections. typo Nuke compatiblity parts. Missing ether_ifdetach in error path. Fix another bunch of missing ether_ifdetach calls. Convert to critical sections. Drop a splz. Convert to critical sections. Stop messing with the interrupt masks. Convert to critical sections. Convert to critical sections. Convert to critical sections. Rename a local variable from s to i, Convert to critical sections. Convert to critical sections. Remove now unused variables. Convert to critical sections. Convert to critical sections. Convert to critical sections. Convert to critical sections. Let the compiler decide what code should be inlined and what not. Convert to critical sections. Convert to critical sections. Convert to critical sections. Convert to critical section. Convert to critical sections. Merge from vendor branch GCC: Include the fortran library sources from GCC 3.4.4. Unhook old g2c from build and start building the GCC 3.4.4 version. Remove old libf2c. Fix some glitches. Also include files in the .OBJDIR. - initialise interface name early Use PCI accessor functions. Stop changing port / memory bits manually. Use ether_crc32_be. Use vr_detach as common error path. - pass dev directly to txp_release_resources, call it only once Use PCI accessor functions, don't change memory / port bit manually. Use queue(3). Use ether_crc32_be. Use local storage for MAC instead of arpcom.ac_enaddr. Explicitly set error before jumping to fail. Common PCI probe style. Setup interrupt last in txp_attach. Protect against concurrent interrupts Use queue(3) macros for if_multiaddrs. queue(3) for if_multiaddrs. Restore copy of revid in softc. Add BCM5751. Regen. Add BCM5751. SuSE includes a special .note.SuSE section descriping the version of Defancy the infinite loop. Fix a bug where the loop wasn't left when Instead of emulating a userland system call via stackgap, use Remove redundant verbosity. The description of the parent just costs space Hack in support for ar(4) based devices. Use if_printf, especially for DPRINTF. Merge two DPRINTF statements Remove now unnecessary messing with PCI command register. sort don't include regression tests, we don't use them anyway. more than enough Tear down interrupt in wi_free when necessary. Add PCI ID for BCM4401-B0. Regen. Add support for Broadcom BCM4401-B0. Push device_t's down to vxattach, not the softc. Remove some unused macros. Constify. Minor style changes. Remove a bunch of breaks after return, merge a return into a default case. Fix some stupid style bugs. Call bus_setup_intr after vxattach, Slightly chance the order of interrupt handling. First hook the Deorbit Alpha. Use local storage for MAC address. Use M_ASSERTPKTHDR. (Obtained-from: FreeBSD) Allow inclusion of Citrus modules in statically linked binaries. Declare module for mapper_parallel explicitly to allow static linking. Sync with NetBSD: Sync with NetBSD: Update to file-4.14. Remove merged patches. Merge from vendor branch FILE: Update to file-4.14. Remove merged patches. Add forgotten wcstoull. Remove explicit int casts for the array index. While it doesn't Sync with master copy. This is necessary to allow installworld from Use pcidevs. Use common probe style and PCI helper functions. It's dead, Jim. Convert to NEWBUS, remove ISA compat. Allow disabling of unused parameter warnings. This is handy for third Allow radio and statistic dump to use arbitrary interface names. Merge from vendor branch OPENPAM: Import OpenPAM Figwort. Add /usr/lib/security, to be used for PAM modules. Build the PAM modules under lib/pam_module, since they are not Catch up with Perl version. Revert part of the ALTQ conversion. It unintenionally removed code which Import current pam_tacplus from FreeBSD HEAD. Also install the pam_tacplus(8) man page. Import current pam_opie(8) from FreeBSD HEAD. DPADD is currently broken since LIBDIR != /usr/lib. Import current pam_krb5 from FreeBSD HEAD. Import current pam_opieaccess from FreeBSD HEAD. Import current pam_radius from FreeBSD HEAD. Import current pam_ssh from FreeBSD HEAD. Hook up remaining PAM modules. Don't use DPADD for now, it's conflicting Switch to OpenPAM. The PAM modules are now installed in /usr/lib/security Define struct in_addr in both, arpa/inet.h and netinet/in.h, A lot of software depends on netinet/in.h being self-contained, Prepare for moving from /etc/pam.conf to /etc/pam.d. Remove unused junk. Change to common PCI probe style. Use ether_crc32_be. Use if_printf most of the time and remove the device_t stored in softc. Move PCCARD attachment into separate file. Use the NEWCARD helper Call bus_setup_intr in xe_attach, not xe_activate. This prevents PAM is dead, long live PAM! fix typo, SRCS should contain the .c file. Prepare for using the "official" PAM support. Merge from vendor branch OPENSSH: Prepare for using the "official" PAM support. MODULE_DIR must include the final '/'. pam_skey is not supported anymore, remove it from the default config. Instead of duplicating the Kerberos tools, use a single version. Switch to auth-pam.c from OpenSSH to unbreak Kerberos 5 build. Fix missing initialisation of big numbers. BN_hex2bn behaves de-nop part introduced in the last commit. Remove the minigzip example from libz. Since the real gzip is based on Switch to zlib 1.2.3, imported under contrib/zlib-1.2 to reduce impact Switch to zlib 1.2.3, imported under contrib/zlib-1.2 to reduce impact Merge from vendor branch ZLIB: Mark zlib 1.2.2 as dead. Some more junk to remove after manual installation. Add emulation of statvfs and fstatvfs based on statfs / fstatfs. Split monolithic /etc/pam.conf into separate files for each service pthread_self is used by stdio. Add shlock(1), a program for the safe creation of lock files from shell First step to cleaning up stdio. This breaks the libc ABI, all programs Merge __sFILEX into __FILE. Let __fpending handle the ungetc buffer Remove partial NetBSD support. It's pointless to have an emulation of regen. Fix C++. Just threat all ELF dynamic objects as shared libraries, don't verify Add description of FPU for floating point conversion functions. Add most parts of the wide char stdio support. This is not hooked up GC unused header from BSD libm. Unhook rc.d/network, it should be unused. Add support for HP hn210e usb ethernet. Install pam.d's README and convert.sh as part of upgrade_etc. Don't panic. Also races in the attach path. Drop GCC < 1 support, simplify inline assembly and use proper #error for Add a new feature-test macro __DF_VISIBLE for those functions we want to Simplify conditional by making use of __DF_VISIBLE and __ISO_C_VISIBLE. FreeBSD and NetBSD both use derivates of Sun's math library. On FreeBSD, Fix typo. Merge revision 1.38 from FreeBSD: cvtstat doesn't exist. Factor out an_detach, since the implementation for all busses is Remove useless .PATH entries inherited from FreeBSD. Use pcidevs and common PCI probe style. Remove unnecessary initialisations. Return ENXIO instead of 1 in Sync with FreeBSD(if_anreg.h 1.1.2.9, if_an.c 1.2.2.14 and Sync with FreeBSD(if_an.c 1.2.2.15, if_aironet_ieee.h 1.1.2.9) While setting up a transmit packet disable interupts on the card then Remove break after return. Eliminate USEGNUDIR and instead allow the target directory to be Instead of overriding LIBDIR, override the new TARGET_LIBDIR variables. Use TARGET_LIBDIR variables instead of overriding LIBDIR. Fresh installations don't have libtelnet installed and older FreeBSD Restore NCURSES_CONST constness. Stop installing profiling libraries as /usr/lib/lib${LIB}_p.a, because Partially backout last commit. /usr/lib is added by GCC anyway and the Since we have a table for each device anyway, we can also use it to GC unused variable. Move ostat definition from sys/stat.h into emulation43bsd/stat.h. Regen. Fix macro name in comment. Add passwd manipulation code based on parts of vipw and chpass. Import pam_deny, pam_permit and pam_unix from FreeBSD, use them instead Import pam_deny, pam_permit and pam_unix from FreeBSD, use them instead Merge from vendor branch OPENPAM: Add new option COMPAT_DF12, used for ABI compatibility with DragonFly libutil.h hsa to go after pwd.h. Instead of always defining FLOATING_POINT to get floating point, define Constify xdigs argument to __u[lj]toa. Cast u_long and u_quad_t to Add missing bit of the wchar stdio support and hook the whole thing in. Make nlink_t 32bit and ino_t 64bit. Implement the old syscall numbers Bump version to 1.3.4 for stat changes. Require 1.3.4 before installworld. Enforce COMPAT_DF12 for now, this can be overriden via NO_COMPAT_DF12 Regen. Readd fix for FreeBSD PR/30631. Catch up with reality, this is GCC 3.4.4. Clean up search directories to what we really use. Really support rpath only linking. Add a special option -nolibc which Remove unused include of sys/dirent.h. Add SYSCTL_NODE_CHILDREN. Add rman_set_device. ANSIfy. Don't depend on struct dirent == struct direct, but fully separate the Remove redundant assignment. Add ethernet port of JVC MP-PRX1. Move up CVS IDs, first must be DragonFly. Reimport devlist2h.awk from FreeBSD, this version actually works. It _PC_NAME_MAX is NAME_MAX, so use that for the storage allocation as Instead of trying to compute the local storage based on maximum entry Instead of MAXNAMELEN, use NAME_MAX for now. This should be revisited Don't check for zero-length direntries, expect the system to handle Don't match entries by hand, just use strcmp. It is efficient enough for MAXNAMELEN ==> PATH_MAX Both file and dp->d_name are NUL-terminated, so it pointless to first Use NAME_MAX instead of MAXNAMLEN and strlcpy, since dp->d_name is Match "." and ".." with strcmp. Make it actually compile without warnings. Use NAME_MAX instead of MAXNAMELEN, replace a strncpy with strlcpy. Use NAME_MAX instead of MAXNAMLEN. Cast ino_t to uint64_t + proper Add vn_get_namelen to simplify correct emulation of statfs with maximum Use vn_get_namelen to reduce bogusnes. Kill stackgap in (f)statvfs(64). Don't hide errors from kern_statfs by overwriting error, check if it was Use vn_get_namelen to provide correct f_namemax field. fsfind should use direct here, not dirent. When allocating memory for the index file, query the filesystem for the Just expect either the kernel or libc to drop empty dirents. Pass the direction to kern_getdirentries, it will be used by the - Propagate error code from various bus_dma functions in bfe_dma_alloc. Revive multicast support, it got lost in the initial import. Set both, CRC32 generation and LED modes. Clear powerdown control bit. If possible, use builtin constants for newer GCC versions, but fallback Add wcsftime(3). Sync with recent libc and libm changes. Add vop_write_dirent helper functions, which isolates the caller from Set baudrate to 100Mbps and advertise VLAN. Break long commits. Don't ask for transfers when there's nothing to Merge from vendor branch GCC: Update GCC 3.4 to current 3.4.5 pre-release. Update GCC 3.4 to current 3.4.5 pre-release. Use new vop_write_dirent function. Use vop_write_dirent. Add _DIRENT_NEXT, which is for now only used in the kernel to skip to Replace the 4.3BSD getdirentries compat function with something which is Use vop_write_dirent. Allocate a temporary buffer for the name for now, Fix merge bug. d_namlen is used by GENERIC_DIRSIZ, when it isn't We want to separate dirent and the maximum directory entry size. Rip off PROCFS_ZOMBIE, it wasn't even a knob to play with. Split the two parts of linprocfs_readdir into subroutines. Split the two parts of procfs_readdir into subroutines. Honor process visibility for jailed processes and ps_showallprocs for Sprinkle some const. Convert to vop_write_dirent. Utilize vop_write_dirent. Slightly change the order by writing to Use vop_write_dirent. Correctly handle the case of spare fd tables, e.g. Improve C++ support. Convert to vop_write_dirent. Add some new Broadcom IDs. Regen. 5705K, 5714C, 5721, 5750, 5750M, 5751M, 5789 support. Atomically load and clear the status block. This makes the bge Enable the memory arbiter before turning off the PXE restart. This Prevent spurious link state changes. HPFS != UFS, so use the right constant for directory entries. Check Create a kernel option BGE_FAKE_AUTONEG for IBM/Intel blade servers, GC unused macro. Also document BGE_FAKE_AUTONEG in LINT. Don't assume that ttys are always located directly in /dev. This Also allocate PATH_MAX for the threaded case. sendmail tried to limit directory names to MAXPATHLEN - MAXNAMLEN in an Back out accidental commit. We have to copy the pam.d entries after running mtree, otherwise Don't define infinite macros when we want to define infinity. No need to forget wcswidth. Make struct dirent contain a full 64bit inode. Allow more than 255 byte Sync GCC 3.4's propolice with Etoh's official version. This fixes the Remove space before '('. Correctly align some function names. No need to Smoke something else and revert the use of ssize_t, I've put it there Add real function versions of the _unlocked family. Use putc for putchar Retire old sendmail. Don't add files patches via *.no_obj.patch back to SRCS, make them a Update __DragonFly_version as well. Don't let this slip, please. GC openssh-3.9 Sync with FreeBSD HEAD. Add strnvis, which is orthogonal to strvisx by bounding dst, not src. Welcome OpenSSH 4.2. Merge from vendor branch OPENSSH: Welcome OpenSSH 4.2. Add some words about PAM. Add /usr/pkg/info to info search path for pkgsrc users. Move the modification of hspace out of the realloc call to make this a When testing whether a negative delta is smaller than the default Call resettodr on shutdown to ensure that RTC and wall clock gets Too many windhills in the third party software world to fight them all. Avoid text-relocations, binutils don't like them on AMD64. Add rcng script to start ftpd stand-alone. Add an option -H to override gethostname. Also install the new ftpd script. Add missing atoll. Fix a very, very old bug in the man page, lbl-csam.arpa doesn't Make the syslog message string static const, saves the copy. Don't depend on the DragonFly keyword anymore, all unsupported scripts Don't depend on DragonFly keyword for shutdown scripts as well. Honor NOFSCHG. Set POSIX feature test macros to the correct value as mendated by SUS. toascii(3) got lost during ctype conversion. Revert last change and add toascii.c to the right variable. Rework nrelease framework for pkgsrc. The way the bootstrap kit was Add g77 link. Add g77 here as well. Add a simple library for the default g77 linkage, providing main. Provide proper offsetof macro for C++. Prefer __size_t here, since Enable wchar usage. Remove useless define. nextpid is not public, so don't use it. It's not really useful anyway, Sleep before commit, remove trailing , Honour NOSFCHG for the kernel installation as well, allowing to install Allow static compilation of linprocfs using a LINPROCFS option. Further cleanup of GCC's builtin directory list in an attempt to When using visibility macros, sys/cdefs.h must be included first. Merge OpenBSD r1.104: Add a fix for CAN-2005-3001, via pkgsrc from Ubunto. Sync vfscanf with FreeBSD, which makes it almost symmetrical to pam_unix.so needs -lutil. Fixes qpopper issues as seen by Adrian Nida. Fix initialisation of wide char support in FILE. Problem reported Apply post-install correction of +CONTEXT files. nrelease now also Be more jail friendly: Add missing callout_init. Improve portability of patch(1): Merge bug fix from binutils main line: Introduce a new variable to hold the size of buf. Use localhost instead of 127.0.0.1, since IPv6 should work as well. Teach kdump a handy new trick: -p $pid selects the records of Include unistd.h to get isatty(). Has been lurking in my release tree Don't depend on POSIX namespace pollution with u_char from sys/types.h. Add wide char support for printf and friends. Fix a possible Fix format string to process all arguments. Noticed by Trevor Kendall Merge rev 1.96 of NetBSD's net/if_spppsubr.c: Move atomic_intr_t to machine/stdint.h and predent __ to reduce nullfs_subr doesn't exist anymore. Ensure that exit_group actually returns a sane value, not some random Add some more PCI IDs for the SATA300 controllers of newer Intel boards. Fix buffer overflow in config parser. Don't try to guess if the hunk was already applied if it doesn't have Check, complain about and accumulate errors when writing reject files Fix one inversed condition. Remove bootstrap code -- you should really have stdint.h by now. Just remove the README file, noone bothers with it anyway. _PATH_MKDIR is unused, nuke it. Make comment more readable by putting the copyright on a separate line. Merge revision 1.21 and 1.22 from OpenBSD. Add a missing "the" to the Fix typos in copyright. Justin C. Sherrill (14): Sync with FreeBSD 4-STABLE manpages. New logo of the DragonFly logo instead of the BSD Beastie. The old Added short descriptions for kern.acct_suspend, kern.acct_resume, and Added mention of using MAKEDEV for when the target slice for installation whereis will find a given application in various locations, including Remove my changes. PATH_PORTS is not checked for multiple entries as A cvsup file that pulls the "checked out" version of source. I'm referencing Removed freebsd.mc line, added ports/arabic line as found in the FreeBSD Testing auth forwarding. Me am idiot! Testing commits. Test commit of new file. Create DragonFly fortune files, based on either the FreeBSD fortune files This commit sets new users to see the DragonFly-tips fortunes instead Making dntpd server list match the recommendations at: Liam J. Foy (190): Test commit! Thanks eirikn! -Remove main proto - Remove main() proto - Remove registers - Fix some sentences -Setmode will deal with the octals, so we dont need to. - Add $DragonFly$ - Bump WARNS to 6 - Remove unnecessary (void) casts Add special case for the german whois nameserver. Without the -Set WARNS to 6 - Remove all registers in bin/. This saves us doing it each time we come Fix cpdup man page. The option -v[vvv] should be -v[vv]. Whoops! setmode(3) returns a void *, not mode_t *. The gr_gid member of the struct group should be gid_t Needed to add sys/types.h for previous commit. Spotted by Mr Dillon. - Allow ipcs to display information about IPC mechanisms owned by - Remove unnecessary selection statement to see if buf == NULL. Sync with FreeBSD rm(1) with my modifications. Some things have been - Remove (void) casts that are not necessary. - Remove space Small patch to fix the output of apm. It called print_batt_life twice - Remove main proto - Remove sys/time.h - Set the clnt error functions protos from char * -> const char * - Set the clnt error functions 's' argument to const char * - Use the correct error functions - Add $DragonFly$ tag - Add WARNS?= 6 - Import newgrp(1) from FreeBSD - Add newgrp(1) into the build - Add new -vv option. If the -v flag is specified more than once, - Remove unnecessary casts More cleans - Fix setmode. Setmode can fail due to malloc. We should print the correct - Sync with FreeBSD - Constify most of the function - Remove unnecessary casts - Remove registers from rcorder - strings.h -> string.h - Remove signal.h - We should always check the setenv() call. Why? because it uses - Check the return value of setenv(). We should check this value since - Whoops! I missed some from the previous commit. My bad, sorry. - We should warn with argv[i] not argv[1] - Remove both errno.h and string.h. Both are unnecessary. - Restructure the code - Complete re-write/re-structure of rev(1). The previous code was ugly! - Add $DragonFly$ tag - Change manpage to show the change from M_ -> MB_ - Deregister strlcpy - Remove unneccessary cast - Add WARNS?= 6 - Make *user, *group and *groupset local, constify them and initialise - Detect and exit on write errors (from FreeBSD, but our code) Sync daemon(8) with FreeBSD: - string.h is unnecessary, remove it -Setmode can also be caused to fail because of malloc(). - Remove *ttyname() - Static functions/globals - Clean sbin/dump of all registers. - Both <unistd.h> and <sys/types.h> can go - unnecessary - Make modestr const and initialize it (stop GCC moaning) Equivalent to: FreeBSD rev 1.82 - Bump WARNS to 6 - Remove duplicate headers - Static functions/variables - UNIX conformance: If -r -f on non-existent directory, don't emit error. - Remove both sys/wait.h & sys/types.h - unnecessary - Add -k option for whois.krnic.net (hold details of IP address - Add WARNS 6 - Add WARNS 6 and make WARNS 6 clean - Remove unnecessary headers - Clean up wc (remove (void) etc) - Complete re-write of sasc. - Remove a duplicate unnecessary check on errno. strtol(3) will only ever set General clean up of lptcontrol. Similar to FreeBSD 1.15 - Add -v option (verbose). Taken from FreeBSD with my modifications. Minor Patch - Initial import of the battd utility. - Tie battd into the build - Do not depend on stat_warn_cont when executing commands. - Large cleanup/changes to man page. - Improve how we handle the APM device. Much cleaner code. - Reduce the size of msg (1024 -> 80) - Remove unnecessary header (time.h) - Remove mode argument from open(). - Remove two unnecessary headers More cleaning: - Improve the debugging functionality in battd. - Fix some spelling mistakes. - Fix a few grammar 'nits'. - Remove unnecessary headers (string.h, sys/uio.h, utmp.h) - Use strcasecmp() for upper and lower cases. - isupper(x) can be true for x in [128..255], but since tolower(x) is - Better handling of a given(-p) invalid port number. - Add battd rcng script - Add battd to rc.conf(5) - Remove unnecessary header - Remove unnecessary casts - SUSV3 states on error we should return > 1 - Fix some xdrproc_t warnings (FreeBSD) - Remove unnecessary headers - The maximum value for a time_t is assumed to be <= INT_MAX, not >= INT_MAX - Remove an unnecessary call to stat(). We can just use lstat before - Static - Back out rev 1.6 noticed by Matt Dillon. However, still keep the use - Use variable 'len' instead of making an unnecessary call to strlen() - Make sure we free() 'modep' after setmode() - Back out part of previous commit. I thought free() was the prefered - Remove unnecessary header - Write status information to stdout instead of stderr. - perror() -> warn() - Update mesg(1) to confirm to SUSv3 changes previously committed. - Remove unnecessary header Minor patch: - Validate the -i option (Idea from FreeBSD) - Correct use of the err() family (err() -> errx()) - Validate -i option - Good english - Sync the example with the correct sentense. Clean up: - Remove sys/stat.h, sys/signal.h, varargs.h and ctype.h - Correct usage of gethostname(3) - Make sure we call exit() after clnt_pcreateerror() - Style(9) - sysctl(3), sethostname(3) and malloc(3) set errno, use it! - Remove unused variable - WARNS 6 - Correct usage of gethostname(3) - warn() -> warnx(). The global variable errno will not be set. - Cosmetic - Clearly state that errno is set - WARNS -> WARNS? - Let receiver know if the sender accepts replies - Use errno to report why freopen(3) failed - Check return value of setenv(3) - Add further functionality to check for invalid characters - ANSI - ANSI - Improve option handling - sprintf -> snprintf - Use socklen_t - Add missing include <stdlib.h> for exit() - WARNS 6 - When calling syslog(3), use %m for errno and lose strerror(3) - Use socklen_t - Produce more informative output to syslog. Generally improve/clean error handling - The AC Line state can be returned as 2 meaning 'backup power'. The previous - Actually make the debug code work - WITHOUT_ERRNO should be 0, silly mistake. - More fixes for the debug code - util.h -> libutil.h - Use pidfile(3). The pid file can be used as a quick reference if the process - Fix memory leak if realloc(3) failed. Use reallocf(3). - Minor restructure. Don't bother calling makemsg() unless we can create - Kill keyword 'register' - int -> pid_t for pid - We should use inet_ntoa which returns an ASCII string of representing the - Small message - Make rwhod now use poll(2) instead of signals - Document mined(1) key bindings - Introduce new -g option. This allows for the broadcast time to be reduced - Whoops, some test code slipped in. Remove it - Don't role our own - use err(3) - Fix return values to conform with SUSv3 - Use MAXPATHLEN - Don't flag NIS entries at invalid - If at least one call to onehost() fails, return 1 It's actually 11 minutes when the machine is assumed to be down and removed - State default - In the ICMP debug code, use %d over %x. This makes it much easier when Remove the hack that varied the first character of the output file name - Do not allocate memory to entries in /var/rwho which have been down for Fix a bug introduced which causes chkgrp to coredump on certain group file - Use pidfilei(3) to write a pid file in /var/run - Don't write our own pid file, just use pidfile() Remove undocumented historic support for treating "-" as an option Include the option-arguments in the description and remove a non-existant Minor typo This program requires at least one option. Clean up the checking and report Remove the sleeper crap and use MAXPATHLEN Kill unused variables perror() -> err() Zap unused variable! K&R style function removal. Update functions to ANSI style. Also a few Don't declare a struct just for sizeof(). Just use sizeof(). - Check for execvp returning ENOTDIR - Dont pass a complete interface struct to icmp_error, just pass the mtu. - Update to new icmp_error signature Matthew Dillon (3856): Initial import from FreeBSD RELENG_4: Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most Remove UUCP support. Note: /usr/src/gnu/libexec/uucp and /usr/src/libexec/uucpd Retarget makesyscalls.sh from FreeBSD to TurtleBSD thread stage 1: convert curproc to curthread, embed struct thread in proc. thread stage 2: convert npxproc to npxthread. thread stage 3: create independant thread structure, unembed from proc. thread stage 4: remove curpcb, use td_pcb reference instead. Move the pcb Oops commit the thread.h file. remove unused variable. get rid of (incorrect) gcc warning. Add missing prototype (fixes warning). thread stage 5: Separate the inline functions out of sys/buf.h, creating thread stage 6: Move thread stack management from the proc structure to thread stage 7: Implement basic LWKTs, use a straight round-robin model for thread stage 8: add crit_enter(), per-thread cpl handling, fix deferred rename td_token to td_xtoken to deal with conflict against sys/thread.h Add kern/lwkt_rwlock.c -- reader/writer locks. Clean up the process exit & thread stage 10: (note stage 9 was the kern/lwkt_rwlock commit). Cleanup Add parens for code readability (no functional change) Finish migrating the cpl into the thread structure. minor code optimization. proc->thread stage 1: change kproc_*() API to take and return threads. Note: Optimize lwkt_rwlock.c a bit proc->thread stage 2: MAJOR revamping of system calls, ucred, jail API, proc->thread stage2: post-commit fixes/cleanup proc->thread stage2: post-commit fixes/cleanup(2) proc->thread stage3: make time accounting threads based and rework it for proc->thread stage 3: synchronize ps, top, and libkvm, and add some convenience proc->thread stage 3.5: Add an IO_CORE flag so coda doesn't have to dig proc->thread stage 4: rework the VFS and DEVICE subsystems to take thread proc->thread stage 4: post commit, introduce sys/file2.h. As with other header proc->thread stage 4: post commit cleanup. Fix minor issues when recompiling Introduce cratom(), remove crcopy(). proc->thread stage 5: BUF/VFS clearance! Remove the ucred argument from simple cleanups (removal of ancient macros) cleanup cleanup some odd uses of curproc. Remove PHOLD/PRELE around physical I/O proc->thread stage 6: kernel threads now create processless LWKT threads. Cleanup lwkt threads a bit, change the exit/reap interlock. go back to using gd_cpuid instead of gd_cpu. smp/up collapse stage 1 of 2: Make UP use the globaldata structure the same Give ps access to a process's thread structure. smp/up collapse stage 2 of 2: cleanup the globaldata structure, cleanup format cleanup for readability. Tab out back-slashes. threaded interrupts 1: Rewrite the ICU interrupt code, splz, and doreti code. Implement interrupt thread preemption + minor cleanup. Add 64 bit display output support to sysctl plus convenient macros. Misc interrupts/LWKT 1/2: interlock the idle thread. Put execution of Misc interrupts/LWKT 1/2: threaded interrupts 2: Major work on the Fix a race in sysctl_out_proc() vs copyout() that could crash the kernel. Add threads to the process-retrieval sysctls so they show up in top, ps, etc. Sync userland up with the kernel. This primarily adjusts ps, etc to handle Operations on the user scheduler must be inside a critical section (fixes For the moment uio_td may sometimes be NULL -> nfsm_request -> nfs_request -> Enhance debugging (sync before MP work). Remove pre-ELF underscore prefix and asnames macro hacks. fix unused variable warning Add some temporary debugging. Split the struct vmmeter cnt structure into a global vmstats structure and properly initialize ncpus for UP Split the struct vmmeter cnt structure into a global vmstats structure and Split the struct vmmeter cnt structure into a global vmstats structure and misc quota proc->thread Misc interrupts/LWKT 2/2: Fix a reentrancy issue with thread repriortization Document hardwired indexes for fields. add gd_other_cpus Generic MP rollup work. Integrate the interrupt related operations for /dev/random support fix a bug in the exit td_switch function, curthread was not always being Remove td_proc dependancy on cred that is no longer used. oops, forgot one. Remove another curproc/cred dependancy MP Implementation 1/2: Get the APIC code working again, sweetly integrate the MP Implementation 2/4: Implement a poor-man's IPI messaging subsystem, MP Implmentation 2A/4: Post commit cleanup, fix missing token releases that The syncer is not a process any more, deal with it as a thread. MP Implmentation 3/4: MAJOR progress on SMP, full userland MP is now working! MP Implmentation 3A/4: Cleanup MP lock operations to allow the MP lock to MP Implmentation 3A/4: Fix stupid bug introduced in last commit. MP Implmentation 3B/4: Remove Xcpuast and Xforward_irq, replacing them MP Implmentation 4/4: Final cleanup for this stage. Deal with a race misc cleanup. Add a case where we don't want an idlethread to HLT (if there Cleanup hardclock() and statclock(), making them work properly even though Fix overflow in delta percentage calculation due to the fact that our 32 bit Make the cpu/stat display work properly again. account for the time array being in microseconds now, and allow the partially fix pctcpu and userland rescheduling. We really have to distribute Forward FAST interrupts to the MP lock holder + minor fixes. Collapse gd_astpending and gd_reqpri together into gd_reqflags. gd_reqflags Make the pigs display more meaningful by showing processes which haven't oops, fix bug in last commit, and adjust the p_slptime check. Add missing required '*' in indirect jmp (fix assembler warning). Fix minor buildworld issues, mainly #include file dependancies and fields Fix minor compile warning. The comment was wrong, ptmmap *is* used, put it back in (fix crash GDB changes required for gdb -k kernel /dev/mem. Still selected by Make the kernel load properly recognize ABS symbols (.SET assembly Remove an unnecessary cli that was causing 'trap 12 with interrupts disabled' Nuke huge mbuf macros stage 1/2: Remove massive inline mbuf macros to reduce Remove the priority part of the priority|flags argument to tsleep(). Only Nuke huge mbuf macros stage 2/2: Cleanup the MCL*() cluster inlines by Remove references to the no longer existant PZERO. zfreei->zfree (there is no zfreei anymore) This is the initial implmentation of the LWKT messaging infrastructure. Add externs for *_nonlocked atomic extensions to avoid warning. Profiling cleanup 1/2: fix crashes (all registers need to be left intact doreti was improperly clearing the entire gd_reqflags field when, in Minor cleanups so GENERIC compiles. Fix underscores in assembly and an Fix a minor compile-time errors when INVARIANTS is not defined. DEV messaging stage 1/4: Rearrange struct cdevsw and add a message port DEV messaging stage 1/4: Rearrange struct cdevsw and add a message port LINT build test. Aggregated source code adjustments to bring most of the Throw better sanity checks into vfs_hang_addrlist() for argp->ex_addrlen DEV messaging stage 2/4: In this stage all DEV commands are now being Remove two unnecessary volatile qualifications. LINT pass. Cleanup missed proc->thread conversions and get rid of warnings. LINT cleanup, add a static function back in which I thought wasn't used. Merge from FreeBSD 2003/07/15 15:49:53 PDT commit to sys/netinet. 2003-07-22 Hiten Pandya <hmp@nxad.com> Here is an implementation of Limited Transmit (RFC 3042) for DragonFlyBSD. Add all_sysent target to /usr/src/sys/Makefile to rebuild syscalls. Preliminary syscall messaging work. Adjust all <syscall>_args structures Bring RCNG in from 5.x and adjust config files and scripts accordingly. Fix some stub prototypes (some missed proc->thread conversions). Have MFS register a device as a VCHR instead of VBLK, fixing a panic. Fix NULL td crash in net/if.c when detaching a net interface. libcr copy: Retarget build paths from ../libc to ../libcr and retarget Syscall messaging work 2: Continue with the implementation of sendsys(), Regenerate all system calls Oops, need to update truss's conf files re: new syscall support header Fix a minor bug in lwkt_init_thread() (the thread was being added to the Remove thread->td_cpu. thread->td_gd (which points to the globaldata Performance cleanup. Greatly reduce the number of %fs prefixed globaldata Fix __asm syntax error from previous commit. Fix minor bug in last commit, add the required register keyword back in syscall messaging 2: Change the standard return value storage for system Fix the msgsys(), semsys(), and shmsys() syscalls which were broken by the add missing lwkt_msg to manually specified syscall args structure. fileops messaging stage 1: add port and feature mask to struct fileops and Remove KKASSERT(p)'s, 'p' (curproc) no longer exists in this functions. syscall messaging 3: Expand the 'header' that goes in front of the syscall LINT synchronization, remove remaining manually defined MIN macros. Turn off CHECK_POINTS debugging. Clear the BSS earlier in the boot sequence and clean up the comments. This Fix a pointer bug introduced by syscall messaging. semget() should work Cleanup remaining tsleep priority issues. Remove NOSECURE which no longer serves a purpose. Note: FreeBSD also removed Explicitly use an unsigned index for 'which' in shmsys(), msgsys(), and Fix a compile time error. Rename MAX to RAINMAX and get rid of all the kernel tree reorganization stage 1: Major cvs repository work (not logged as kernel tree reorganization stage 1: Major cvs repository work (not logged as kernel tree reorganization stage 1: Major cvs repository work (not logged as kernel tree reorganization stage 1: Major cvs repository work (not logged as kernel tree reorganization stage 1a: remove -I- for kmod build Set _THREAD_SAFE preprocessor symbol when the -pthread option is used, Kernel tree reorganization stage 2: Major cvs repository work. Fix the buildkernel target. Add a few missing cratom() calls. In particular the code in kern_exec() Add the 'wmake' script to /usr/bin and wmake support to Makefile.inc1. add cvs-site dist (The layout for the dragonflybsd.org web site). Add src/test and throw in some of the ad-hoc timing and testing programs Syscall messaging 4: Further expand the kernel-version of the syscall message. Syscall messaging 4a: LINT build. Remove FBSDID, move $ rcs id back to a comment field. Add softlinks so ports like sysutils/ffsrecov will compile. Eventually Make modules work again part 1: Rename netgraph's Makefile.module to Make modules work again part 1: wire up the module build for bus/ Make modules work again part 1: linkup emulation/ and change the architecture Make modules work again part 1: linkup "net" and rename Makefile.module files Make modules work again part 1: linkup vfs, rename Makefile.module files, Make modules work again part 1: linkup crypto. Make modules work again part 1: forgot a file in 'emulation'. master makefile for netproto modules, of which there is exactly one Standardize the syscall generation target to 'sysent'. Make modules work again part 1: hook up 'dev'. Note that not all devices Fix compile-time error when compiling a profiled GENERIC. Make modules work again part 2 (final): Link the module build back into the Add missing Makefile.modules (forgot to add it in the last commit) The make release process tries to stat/open a non-existant device, which Linux needs %edx to be 0 on entry. It registers it as an atexit function if Linux needs %edx to be 0 on entry. It registers it as an atexit function if Set SYSDIR properly for sys/crypto and deep sys/dev/disk/... drivers by Silence warnings about _POSIX_C_SOURCE not being defined. This occured when Fix minor bug in last commit. Temporarily go back to an absolute path for signal.h's #inclusion of trap.h Separate system call generation out from the Makefile so kernel and Modify the header dependancy ordering so the bsd.kmod.mk/bsd.prog.mk Adjust an #include line to fix a build error. Reintegrate the module build for usb.ko Allow a NULL dev to be passed to _devsw(). This should close any remaining Separate out userland copyin/copyout operations from the core syscall code Add an alignment feature to vm_map_findspace(). This feature will be used Remove now unused proc p variables. Add the 'eatmem' utility. eatmem is a memory stressor. Add the NO_KMEM_MAP kernel configuration option. This is a temporary option Remove unnecessary KKASSERT(p) checks for procedures which do not need Remove additional KKASSERT(p)'s that we do not need (fixes lint) SLAB ALLOCATOR Stage 1. This brings in a slab allocator written from scratch A significant number of applications need access to kernel data Adjust our kvm based utilities to use the _KERNEL_STRUCTURES define Cleanup some debugging junk and fix a bug in the M_ZERO optimization code. oops. Forgot a commit. At Jeffrey Hsu's suggestion (who follows USENIX papers far more closely the Fix sendfile() bug introduced by the message passing code. The call to req->r_td can be NULL, remove KKASSERT() and fix check. cleanup: remove register keyword, ANSIze procedure arguments. Do a bit of Ansification, add some pmap assertions to catch the General cleanup, ANSIfication, and documentation. Prep work for the VFS Adding missing '|' from last commit. Make bind1() and accept1() non-static for linux emulator use. Adjust linux emulation calls for bind and listen to use the new broken-out Make bind1() and accept1() non-static for linux emulator use. Convert DIRECTIO code to DragonFly. Add DIRECTIO to LINT Cleanup some broken #include's for VESA. Add the relpath utility which /usr/share/mk/bsd.port.mk will use to Add bsd.dport.mk, which Makefile's in /usr/dports will use instead of add relpath to the build Makefile Add UPDATING note on /usr/dports rename /usr/dports to /usr/dfports. Includes some CVS surgery. rename dport -> dfport Add a cvsup tag, cvs-dfports Cleanup. Remove unused variable. Change argument from proc to td. Remove unused thread pointers. Permanently fix the 'allocating low physmem in early boot' problem which update version to match FreeBSD after buffer security fix. Additional comments: ssh may attempt to zero and free the buffer from Fix a DIAGNOSTIC check in the heavy-weight switch code. A thread's process is Additional ssh patches relating to the same fatal() cleanup issue. They Import prebinding code into DragonFly, based on Matthew N. Dodd's The twe driver requires all requests, including non-data requests, to be Add notation on the alignment requirement for the twe driver. namecache work stage 1: namespace cleanups. Add a NAMEI_ prefix to Clean up thread priority and critical section handling during boot. The oops, remove some namecache leakage that is not ready for prime time yet. Cleanup td_wmesg after a tsleep completes for easier debugging. Apply FreeBSD Security Advisory FreeBSD-SA-03:14.arp. Fix DOS crash due Do not attempt to access kernel_map in free(). It's a bad idea, and doubly Fix a number of mp_lock issues. I had outsmarted myself trying to deal with Allow unlock and non-blocking lock operations from FAST interrupts. Remove the NO_KMEM_MAP and USE_SLAB_ALLOCATOR kernel options. Temporarily Remove the NO_KMEM_MAP and USE_SLAB_ALLOCATOR kernel options. Temporarily Try to generate more debugging information when the critical nesting count namecache work stage 2: move struct namecache to its own header file and Hash the link-check data to improve performance on filesystems containing a Cleanup: get rid of the CNP_NOFOLLOW pseudo-flag. #define 0'd flags are a Move pst from dev/misc to dev/raid, add a Makefile for 'pst'. Add pst to the raid Makefile Move the pst device from dev/misc to dev/raid. Fix a negative cache entry reaping bug, cache_zap() expects ncp's ref count Fix a bug in lwkt_trytoken(), it failed to exit its critical section on Describe the hw.physmem loader variable. correct a comment. Fix a number of interrupt related issues. Addendum: Many thanks for continuing to Galen Sampson for running such an The CMOV family of instructions do not work across all cpu families. In The splash_bmp and splash_pcx builds were missing some symbols, add the Disable background bitmap writes. They appear to cause at least two race Add _KERNEL_STRUCTURES support for userland use of this header file. Define _KERNEL_STRUCTURES instead of _KERNEL to get just the Use the _KERNEL_STRUCTURES define to allow userland to bring in kernel Sync with FreeBSD-5, fix-up the use of certain flags variables. namecache work stage 3a: Adjust the VFS APIs to include a namecache pointer Data reads and writes should not need credentials, and most filesystems add bsd.dfport.pre.mk and bsd.dfport.post.mk, part of the DragonFly ports Upgrade isc-dhcp, e.g. dhclient. Add a -p option to fdisk that allows it to operate on normal files which document the new fdisk option. Fix bugs introduced from the last commit. The loadav routine somehow got More hacks to support DragonFly port overrides. Deal with ports which Add a splash example / DragonFly BMP add /usr/share/examples/splash to the mtree, for installworld. Add bsd.dfport.pre.mk and bsd.dfport.post.mk to the Makefile Augment falloc() to support thread-only file pointers (with no integer file Extend NSIG to 64 and introduce a registration function for the checkpointing Use the one remaining free termios control character slot for a tty Fix miscellanious kern_fp.c bugs. Start separating the ucred from NDINIT. Oops, I gave Kip bad advise. The checkpoint execution code is supposed Add checkpoint tty signaling support to stty and tcsh. The signal Make sure PORTSDIR is not in the environment when bootstrapping a /bin/sh needs to use sys_nsig, not NSIG, when accessing static arrays NSIG is now 64. Extend the siglist arrays to match. Update the sys_sig* manual page to include portability suggestions. Fix an inherited bug in ptrace's PT_IO. The wrong thread was being assigned Have lwkt_reltoken() return the generation number to facilitate checks correct the .PATH for access to scvesactl.c Fix the userland scheduler. When the scheduler releases the P_CURPROC Remove PUSER entirely. Since p_priority has nothing to do with kernel Cleanup P_CURPROC and P_CP_RELEASED handling. P_CP_RELEASED prevents the Fix a uni-processor bug with the last commit... we weren't rescheduling on Make malloc_type statistics per-cpu, which fixes statistics update races Fix memory leaks in the namecache code commited so far to stabilize its Make vmstat -m understand the new per-cpu aggregation for machine/param.h has to be included outside of _KERNEL for MAXCPU. Fix races in ihashget that were introduced when I introduced the Remove a diagnostic message that detected PHOLD() vs exit()/process-reaping Don't try to call an ipiq function with a UP build when the cpuid doesn't Hack up param.h even more so sys/socket.h can include portions of it without Entirely remove the old kernel malloc and kmem_map code. The slab allocator Fix the INVARIANTS memuse test, the memuse for each cpu must be aggregated Enhance the fp_*() API. Reorganize the ELF dump code using the fp_*() Add a fp_vpopen() function to kern_fp.c, and add reserved fields to Add the 'checkpt' utility to support the process checkpoint module. This Add the 'checkpt' utility to support the process checkpoint module. This Additional checkpoint suppor for vmspace info. In particular, the data size Add the checkpt module to the system tree. Currently this may only be Do a bit of cleanup and add a bit of debugging to the checkpoint module. Fix an error message. Fix a bug introduced by recent INVARIANTS debugging additions, sometimes Yah yah. Always good to commit the header file needed for the last fix. A hackish fix to a bug uio_yield(). Fix bug in last commit, flags were not being passed to fo_write() which Use sys_nsig instead of NSIG so libc-suspplied arrays match up. Add support for SIGCKPT to csh/tcsh's internal 'kill' command. MSGF_DONE needs to be cleared when a message is to be handled asynchronously. Hook i386/vesa up to the module build Add SIGCKPT support to tcsh's built-in kill. This is FreeBSD 5.x's code to dump the kernel's identifier through Honor MODULES_OVERRIDE if defined. Fix type-o's, minor documentation update. Interrupt threads could block in free waiting for kernel_map(). Add a Add a DEBUG_INTERRUPTS option for debugging unexpected (trap 30) interrupts. Simplify the lazy-release code for P_CURPROC, removing the TDF_RUNQ Backout last commit, oops! SIGCKPT had already been added and it was redundant. Fix a bug introduced when I redid the stray interrupt arary. The Deal with multicast packets in a manner similar to Solaris, RFC 3376, and Better documentation of the MP lock state for new threads. Augment the LWKT thread creation APIs to allow a cpu to be specified. This 64 bit address space cleanups which are a prerequisit for future 64 bit Do a minor update of Groff to deal with the addition of intmax_t and Do a minor update of getconf to deal with the addition of intmax_t and Fix a minor compile-time bug introduced in 1.22 when DEBUG_VFS_LOCKS is Fix the pt_entry_t and pd_entry_t types. They were previously pointers to Give up trying to do this the clean way and just hack groff to remove Fix LINT issues with vm_paddr_t Fix a translation bug in the last commit. The second status translation Add a missing element related to the dev messaging changes. Get rid of the intmax_t dependancies and just specify the maximum add cmpdi2 and ucmpdi2 to conf/files to fix LINT build. Change ui_ref from an unsigned short to an integer. A 16 bit ref count is Cleanup the ui*() API in preparation for the addition of variant-symlink Variant symlink support stage 1/2: Implement support for storing and retrieving An additional ui*() API cleanup that I missed. Add the 'varsym' utility which may be used to manipulate system-wide and Remove named arguments in the varsym prototypes which conflict with not very Add 3c940 ethernet driver support ('sk' driver) for ASUS K8V (AMD64) If a panic occurs from a BIOS call (16 bit mode) or VM86 DDB will attempt Fatal traps were not reporting the correct %cs (code selector) for the SMP sync from FreeBSD-5.x. No operational changes. Get rid of a .byte macro Synchronize APMBIOS with FreeBSD-5.x bios.c 1.64. The primary change is Import a good chunk of the PNP code from FreeBSD-5.x Hopefully do a better job disassembling code in 16 bit mode. Document critical section and IPIQ interactions in the message port Network threading stage 1/3: netisrs are already software interrupts, Update the systat manual page to reflect the new -ifstat option. Core integer types header file reorganization stage 1/2: Create and/or modify Core integer types header file reorganization stage 2/2: Add a wmakeenv target for use by the wmakeenv script to aid in debugging Initialize base unconditionally to get rid of a bogus compiler warning. Add prototypes for __cmpdi2() and __ucmpdi2(), get rid of compiler warnings. Add a pmap_getport_timeout() call hack that allows rpc.umntall to control Add the -f 'fast timeout' option to rpc.umntall. Using this option will Add the -f option to the rpc.umntall call to reduce boot-time stalls when Fix minor snafu in last commit. machine/stdarg provides __va* macros, not va* macros, use <stdarg.h> instead 3c940 bug fix update. This patch simplifies IFF_PROMISC checking from Implement variant symlinks! In order to use variant symlinks you must first Catch attempts to queue to unregistered ISRs Fully synchronize sys/boot from FreeBSD-5.x, but add / to the module path Fully synchronize sys/boot from FreeBSD-5.x, but add / to the module path Fully synchronize sys/boot from FreeBSD-5.x, but add / to the module path Add a temporary workaround to prevent a panic due to a truss related problem. Add buffer bounds checking during check pointing since it is possible for Document the edx assignment to fds[1] What happens when you mod a negative number? Mask off the hash value before The last major syscall separation commit completely broke our lseek() as well Add the varsym_list() system call and add listing support to the varsym Add the varsym_list() system call and add listing support to the varsym Most code that calls vm_object_allocate() assumes that the allocation will Prep for GCC 3.x kernel compiles, stage 1/2: Remove bad C syntax including Add Marvell chipset support. Adding the PCI ID is all that was needed. cleanup extra )'s missed by the __P commit. The MBWTest program (/tmp/mbw1) attempts to figure out the L1 and L2 Refuse to load dependancies from the root filesystem during early boot when Bring in the MODULE_DEPEND() and DECLARE_MODULE() macros from FreeBSD-5.x. MT_TAG mbufs are terrible hacks and cannot be freed. Skip any MT_TAGs Remove __P() macros from include/ __P removal. Fix bug in last syscall separation commit, an extra semicolon was causing Add a make variable listing obsolete header files that need to be removed Prep for GCC 3.x kernel compiles, stage 2/2: Remove bad __asm embedded-newline Move bsd.kern.mk from /usr/src/share/mk to /usr/src/sys/conf and Misc cleanups to take care of GCC3.x warnings. Missing 'U' and 'LL' Correct bugs introduced in the last syscall separation commit. The panic() normally tries to sync. Add a sysctl to control the behavior in Fix a bug in the last fix to a prior locking issue. A vdrop() was Cleanup the linux exec code to match other recent fixes to the exec code. Enable Conrad parallel port radio clock support in ntp by default. No reason Cleanup aux args and 32-byte align the initial user stack pointer. Note that Fix a missing backslashed in the 1.9 commit. clarify the solution for typical build snafus in UPDATING. Backout part of 1.16. It is not necessary to align the stack at this Correct several bugs. If we fail to add a device be sure to delete its kobj. Use M_ZERO instead of manually bzero()ing memory allocated with malloc(). Sync TAILQ_FOREACH work from 5.x. The closer we can get this file to 5.x Turn off CHECK_POINTS in mpboot.s. It was accidently left on which resulted RC cleanups and minor bug fixes to support the uname change to DragonFly Add a .makeenv directive to make, which allows variables to be exported to This file needs sys/cdefs.h for __ extensions and such (similar to what Change the system name from 'FreeBSD' to 'DragonFly'. We are now officially Change the system name from 'FreeBSD' to 'DragonFly'. Additional commits Add syscall4 (/tmp/sc4) perf test. This tests stat() overhead instead of Redo the 'upgrade' target. Get rid of the old elf stuff and change the Change the system name from 'FreeBSD' to 'DragonFly'. Change the system name from 'FreeBSD' to 'DragonFly'. Adjust nfs module loading to use nfs.ko (4.x style) rather then the This is a major cleanup of the LWKT message port code. The messaging code Add a DECLARE_DUMMY_MODULE() so we can get linker_set module names Add a DECLARE_DUMMY_MODULE for snd_cmi to detect kld/static-kernel conflicts. Implement an upcall mechanism to support userland LWKT. This mechanism will Add a big whopping manual page for the upcall syscalls, upc_register and Add UPC_CRITADD, the value that crit_count must be increased or decreased Add an upcall performance test program + example context function Add some comments to the upcall test code. Tweak the context data a bit and do some code cleanup. Save %edx as well 'Building databases' has 10 seconds worth of sleeps that it doesn't need. Do some fairly major include file cleanups to further separate kernelland When looking for files that have already been linked, strip off any Make the 'bad isr' panics a little more verbose. NETISR_POLL cannot use isr 0. Use isr 1. Fix the OID_AUTO collision with static sysctl numbers. This can occur More LWKT messaging cleanups. Isolate the default port functions by making Add a new library, libcaps, which is intended to encompass DragonFly-specific Add /var/caps/root and /var/caps/users. IPC rendezvous services for root Set the close-on-exec flag for CAPS client descriptors. #include cleanups for lwkt_msgport.c and lwkt_thread.c... the committed Cleanup POSIX real-time so the kernel compiles without the P1003_1B Fix a DOS in rfork(). Disallow kernel-only flags. Fix bug in last commit (missing ..) Add the MPIPE subsystem. This subsystem is used for 'pipelining' fixed-size Add cpdup to /bin (not /usr/bin), so we can start using it during boot Add the -C option to mount_mfs. This option will automatically copy the Documentat mount_mfs -C Add a missing PRELE() when the mfs_mount kernel process exits. Because Do not require -i0 when -o is used. When attempting to open a file path do not treat a file that appears as a Introduce /usr/src/nrelease which begins to implement the new 'live CD' Fix a minor bug... install cvsup and mkisofs in the correct Temporary disable the ports checks.. the Makefile has to be runnable from Add nreleae back in. It's a dummy target designed only to ensure that Use mkiso -R instead of -r in order to properly record file modes. In Add a README file, rc.conf, and example fstab. Disable a number of system After testing the manual installation instructions on a real box make some Add additional information on cvsup'ing various sources. Add chmod 1777 for /tmp and make other minor adjustments. As part of the libcaps threading work a number of routines in lwkt_thread.c General cleanups as part of the libcaps userland threading work. Augment the upcall system calls to support a priority mechanism rather then Major libcaps work to support userland threading. Stage 1/2. Minor cleanups to sysport. Use ms_msgsize in the sendsys() call. Modify the upcall code to access the critical count via the current thread Adjust a comment. Add additional functionality to the upcall support to allow us to wait for Major update to libcaps. Implement support for virtual cpus using Thread testing code for libcaps. Convert alpm to use the devmethod code. PCI compat cleanup, part 1. This brings in the LNC and VX drivers This patch adds a bunch of stuff from FreeBSD5. It consistantly makes Add strlcpy and strlcat to libkern Add libc support for propolice. See: Add -fstack-protector and -fno-stack-protector support to GCC. Note Add -fstack-protector support for the kernel. More cleanups to make ports work better. Do not print out error messages in quiet mode to make shell scriptiong Add the -o file option to rcorder. This will cause rcorder to generate Add a -p option to rcorder which generates the PROVIDE keywords for the Adjust rc.subr to generate system-wide varsyms tracking RCNG operations. Add /sbin/rcrun and symlinks rcstart, rcstop, rcrestart, rcvar, rclist, Oops, forgot the rc.subr update Make a distinction between disabled entries and running entries. Do not Be smarter about services which are in a disabled state. If the Add another special return code, 3, which means 'subsystem is not Minor grammatical fix. Add an 'enable' and 'disable' target to rcrun, and add softlink shortcuts use the proper $RC_ variable when returning irrelevancy. Make savecore return RC_CONFIGURED unconditionally. Support multicast on the Marvell Yukon Chipset. The GMAC on the Yukon Fix a memory leak that occurs when an attempt is made to checkpoint USER_LDT is now required by a number of packages as well as our upcoming Fix a syscall separation bug in recvfrom() which sometimes dereferenced Minor syntax cleanup. Patch to make the P4S8X run in ATA100 mode. This is a compromise, since Add a p_sched field to accomodate Adam K Kirchhoff's scheduler work. This The attribution for the last commit was incorrect, it should have been: Try to catch mbuf cluster mclfree list corruption a little earlier with Backgroundable NFS mounts can still cause a boot sequence to stall for FreeBSD-5.x removed the 'read interrupt arrived early' check code, for Most motherboards these days have at least two USB controllers. Adjust Synchronize the USB, CAM, and TASKQUEUE subsystems with FreeBSD RELENG_4. Fix the peripheral list scan code, which was broken when the new linker set Primarily add a missing lwkt_reltoken() in ntfs_ntput(), plus a little minor syntax cleanup Get rid of individual /* $xxx$ */ comments entirely (which was an artifact Add missing prototypes in last commit, other minor adjustments. Add support for cam_calc_geometry(), from FreeBSD-5. Bring in the entire FreeBSD-5 USB infrastructure. As of this commit my These files have been moved to bus/usb. Add idle entry halt vs spin statistics counters machdep.cpu_idle_*, Cleanup a DIAGNOSTIC test so LINT compiles. Bump the network interface cloning API to what is in 5.x with the following Get EHCI to compile. This will make USB2.0 work when 'device ehci' is Rearrange an INVARIANTS overflow test so it works with more immediacy. Fix a bug: remove an extra crit_enter() in the default waitport/waitmsg Remove proc related extern's from sys/proc.h that are now static's inside When you attempt to kldload snd_sb16 and then unload it, the system crashes. Import the libkern/fnmatch code from FreeBSD-5. Maintain the fnmatch() API from FreeBSD-5 using an inline, rename the if_xname support Part 1/2: Convert most of the netif devices to use if_xname support Part 2/2: Convert remaining netif devices and implement full if_xname support Part 2b/2: Convert remaining netif devices and implement full Add the -r option to set the hostname based on a reverse lookup of an IP bktr depends on bktr_mem. Add a KMODDEPS line. *** empty log message *** Compensate for the frequency error that occurs at higher 'hz' settings. tvtohz() was originally designed for tsleep() and timeout() operations but note last commit based on evidence supplied by: Paul Herman <pherman@frenchfries.net> Fix a bug introduced in the last commit. When calculating the delta count Add necessary critical sections to microtime() and nanotime(). Make the phase synchronization of the hz clock interrupt (I8254 timer0) Arg3 to kern_fcntl was incorrectly passed as NULL, causing Minor corrections to the documentation. ANSIfy procedure arguments. Add a missing thread pointer to a busdma call that needs it. Cleanup the vm_map_entry_[k]reserve/[k]release() API. This API is used to npx_intr() expects an interrupt frame but was given something inbetween an Fix a long-standing bug in protocol 2 operation. The client will top sending minor syntax cleanups (non-operational changes). scrap stderr from the ps output to avoid an annoying warning (due to Major GCC surgery. Move most compiler-specific files into named Augment the upgrade_etc target to remove stale compiler-related binaries Fixup /usr/libexec/gcc2/aout binutils generation. The install targets had Adjust the upgrade target to remove stale /usr/libexec/aout files now Handle recursive situations a bit more robustly by adding a second reference non operational change. vm_map_wire() now takes a flags argument instead Fix an off-by-one error in fdfree() related to flock/fcntl unlock on Fix reversed snprintf arguments. CAPS IPC library stage 1/3: The core CAPS IPC code, providing system calls CAPS IPC library stage 2/3: Adjust syscalls.master and regenerate our vm_uiomove() is a VFS_IOOPT related procedure, conditionalize it Add test programs for the new caps system calls. Temporarily disable the move the caps_type enum so it is accessible through both user and kernel Fix a minor malloc leak in ips. Add and document an example disklabel file for the ISO Fix a panic if -i is used on an interface which does not have an IP. Retool the M_* flags to malloc() and the VM_ALLOC_* flags to Resident executable support stage 1/4: Add kernel bits and syscall support Resident executable support stage 2/4: userland bits. Augment rtld-elf Add sys/resident.h for userland syscall prototypes, and give the unregister Resident executable support stage 3/4: Cleanup rtld-elf and augment and Resident executable support stage 4/4: remove prebinding support. Resident executable support stage 4/4: cleanup options. -R and -x id now It really is supposed to be CAPF_ANYCLIENT. Follow FreeBSD's lead and remove a test utility which is not under a free Fix the bandwidth specifier once and for all. Properly distinguish Use the fliesystem block size instead of BUFSIZ when the client sends files. Fix some type-o's and add some info on things to try if your initial attempt Add a 'dd' prior to the first fdisk to cover situations where people are Get rid of embedded newline continuation entirely and switch to an ANSI Assume read access for execute requests so we can fill in the read credential Synch the XL driver with FreeBSD-4.x. Turn off hardware assisted transmit checksums by default. In buildworld When creating a new uidinfo structure, check for an allocation race whether Cleanup the duel-macro specifications in sioreg.h and ns16550.h by having Prevent killall from killing itself. Fix a serious bug in the NTPD loopfilter. Basically what happens is that USB storage devices are standard fare these days, add device atapicam to Fix type-o's and add a few files to the cleanup list. Describe variant symlinks in the 'ln' manual page. func() -> func(void) style. Try to work-around a DFly-specific crash that can occur in ufs_ihashget() Fix the DDB 'trace' command. When segment address adjustments were added Undo some of the previously made changes to deal with cross build issues Rename Makefile.sub back to Makefile.inc to fix cross builds. Rename .sub files back to .inc. Remove the remainder of Makefile.sub handling. Make objformat aware of the have upgrade / upgrade_etc remove /usr/libexec/gcc2 now as well, it has Add back a directory path that buildworld needs, remove /usr/libexec/gcc2 oops, undo part of the last commit. cpp, f77 need their gccX frontends in Set a variable indicating that we support .makeenv, so we can conditionalize This should hopefully fix current issues with bootstrap buildworlds from This commit represents a major revamping of the clock interrupt and timebase This commit represents a major revamping of the clock interrupt and timebase This commit represents a major revamping of the clock interrupt and timebase Clean up the sys_nsig verses NSIG code to handle additional fault Update miscellanious firewire manual pages from FreeBSD-4.x. Remove limitations on the 'machine' specification. MAINTAINER lines in Makefile's are no longer applicable, remove them. Remove genassym and gensetdefs. These programs are obsolete and no longer remove genassym and gensetdefs in the upgrade_etc target. Make gcc3 the default for non-i386 architectures, leave gcc2 the default binutils214 stage 1/4. Bring in the build infrastructure (left untied from Don't use wildcards in csh expansions to avoid early termination of binutils214 stage 2/4. Merge from vendor branch BINUTILS: Bring GNU binutils-2.14. This commit is an exact copy of the contents of binutils214 stage 2/4 (continued). remove /usr/bin/gcc{2,3} from AMD64 infrastructure stage 1/99. This is just a preliminary commit. Many Update the GCC3 infrastructure Stage 1/2. This commit generates the basic Undo the xmalloc->malloc optimization FreeBSD made in certain cases Adjust __FreeBSD__ -> __DragonFly_ Make the 'realclean' target respect KERNCONF. Add -DSETPWENT_VOID to CFLAGS to work around a __FreeBSD__ conditional in Adjust osreldate.h to only set __FreeBSD_version if __FreeBSD__ is set, Fake __FreeBSD__ for various contrib/ code that needs it. Change __FreeBSD__ -> __DragonFly__ Remove obsolete __FreeBSD_version checks, add __DragonFly__ tests to Try to clean up more of GCC's xrealloc and xmalloc wrapper mess so Fake __FreeBSD__ for sendmail contrib use. This should fix the dependancy loop for sure. More __FreeBSD__ -> __DragonFly__ translation Add some basic in-pipeline instruction timing tests. Instruction timings Add a locked-bus-cycle add to memory test Fix a DFly buildworld from 4.x issue. Only set HAVE_STDINT_H for Split the lwkt_token code out of lwkt_thread.c. Give it its own file. devsw() does not exist in DFly. use dev_dflags() to extract d_flags. The logical pci busses must attach to the physical pci bridges using the Convert mbuf M_ flags into malloc M_ flags when calling malloc(). Use a globaldata_t instead of a cpuid in the lwkt_token structure. The isa_wrongintr() cannot depend on the (void *) unit argument pointing to activate any tick-delayed software interrupts in the per-cpu hardclock sys/dev __FreeBSD__ -> __DragonFly__ cleanups. Change lwkt_send_ipiq() and lwkt_wait_ipiq() to take a globaldata_t instead Convert __FreeBSD__ tests to __FreeBSD__ and __DragonFly__ tests Add -D__FreeBSD__ for buildworld (vacation pulls source files from Use "iq" instead of "ir" for the register constraint. "iq" means 'an bio ops can be initiated from the buffer cache, td_proc may be NULL here. Oops. Forgot to renumber the %N's in the macros in the last commit. Rewrite the IP checksum code. Get rid of all the inline assembly garbage, Fix forgotten lwkt_send_ipiq() API update. cpuid -> globaldata pointer C99 specify ary[] instead of ary[0] in structure. gcc2 doesn't like ary[] inside structures. Add __ARRAY_ZERO to Add PCI busses to the device list in bus number order to make debug output Actively manage cam rescan timeouts in the usb subsystem. This does not Change M_NOWAIT to M_WAITOK. This does not fix any known issues but it Get rid of some old cruft and add a failsafe for M_WAITOK which guarentees Create a new machine type, cpumask_t, to represent a mask of cpus, and Move <machine/in_cksum.h> to <sys/in_cksum.h>. This file is now platform Remove old in_cksum.c and in_cksum.h (they were moved to netinet and sys atomic_*_*_nonlocked() inlines are not MP locked. Remove the MPLOCKED Split the IPIQ messaging out of lwkt_thread.c and move it to its own file, Cleanup and augment the cpu synchronization API a bit. Embed the maxcount Install new .mk files (/usr/src/share/mk) as part of the upgrade_etc target. Make buftimetoken an extern so it is not declared as a common variable. buftimetoken must be declared in a .c file. Remove duplicate declarations in preparation for adding -fno-common to Remove duplicate bioops declaration in preparation for -fno-common Add a dependant include. Remove common variable to get ready for -fno-common. Compile the kernel with -fno-common to guarentee that we do not accidently Remove common declaration for -fno-common Remove duplicate declarations for -fno-common Remove duplicate declarations for -fno-common Undo part of the last commit. Part of a different patch set leaked into it Introduce an MI cpu synchronization API, redo the SMP AP startup code, (followup) remove lockid. ATAng stage 1: synch ad_attach() and atapi_attach(), including a fix for ATAng stage 2: sync part of the ata_dma*() API. No operational changes. ATAng stage 3: sync additional atang from 4.x, mostly non-opertional changes, ATAng stage 4: sync additional atang from 4.x, all non-operational changes ATAng stage 5: sync additional function API changes from FBsd-4. We now ATAng stage 5: sync chipset changes and bug fixes. busdma is not synched yet. Add experimental (as in hacked) support for the Silicon Image SATA ATAng stage 6: Comment-only. Many thanks to David Rhodus for generating Merge vfs/ufs/ufs_disksubr.c into kern/subr_disk.c. The procedures in RCNG, shutdown ppp connections nicely when told to. M_NOWAIT work stage 1/999: Fix some boot-time misuses of M_NOWAIT -> M_WAITOK Implement a pipe KVM cache primarily to reduce unnecessary TLB IPIs between Synchronize a bunch of things from FreeBSD-5 in preparation for the new Synchronize a bunch of things from FreeBSD-5 in preparation for the new Bring in the FreeBSD-5 ACPICA code as a module. Note: not hooked up yet, Bring in acpica-unix-20031203. As with other contrib imports, this import Fix a bug in the last commit. 4.x improperly tries to add the children Include the required machine/bus.h if we do not already have it. Bring in additional stuff from FreeBSD-5, fixing some issues (fwohci not INTR_EXCL moved to sys/bus.h, add #include. Remove static resource_disabled(), the function is now supplied by Make nexus understand the new INTR_ flags, mainly INTR_ENTROPY. Newtoken commit. Change the token implementation as follows: (1) Obtaining Partitions>8: Do not hardwire partition limit at 'h'. Partitions>8: Leave a whole 512 bytes for the disklabel and squeeze code Partitions>8: Increase the number of supported partitions from 8 to 16. Correct bug introduced in last commit. Simplify LWKT message initialization semantics to reduce API confusion. Bring libcaps in line with recent LWKT changes. Additional CAPS IPC work. Add additional system calls to allow a CAPS Adjust the caps client/server test code to match new CAPS features. The The sys/xxx2.h files are supposed to be included after all the normal get rid of thr{1,2,3}, which are obsolete at the moment. Keep the Initial CAPS IPC structural encoding and decoding support. Note that the libcaps now compiles ipiq and token in userland, make those files compile ANSIfy the tsleep() and sched_setup() procedure definitions. Config cleanup part 1/3: Remove old style C cruft and cleanup some variable Config cleanup part 2/3: Remove old style C cruft. Config cleanup part 3/3: Remove the ns() and twisty eq() macros and replace The "Don't forget to do a ``make depend''" warning no longer serves any Add -B to display buffer limits instead of current buffer usage. Increase the default socket buffer for NFS to deal with linux bugs and to Addendum comment to last commit. Jeffrey Hsu reminded me that kernel writes ncpus2 must be initialized to 1 in the UP case. ncpus2_mask and ncpus2_shift Allow the nominal NFS io block size to be set with a sysctl vfs.nfs.nfs_io_size Explain some of the instruction details in more depth. ANSIfication, convert K&R procedure declarations and remove 'register'. dislabel -> 16 partitions work addendum: MAKEDEV now understands and Minor documentation adjustments. Allow %e, %E, %f, %g, %G formats to work without producing an error code. ANSIfication, remove 'register' and 'auto' use, convert K&R procedure decls. gdb was unable to obtain backtraces of pure threads. Change the 'proc' Change M_NOWAIT to M_INTWAIT or M_WAITOK. CAM does a mediocre job checking Use M_INTWAIT and M_WAITOK instead of M_NOWAIT within the USB bus In an rfork'd or vfork'd situation where multiple processes are sharing Fix a bunch of NFS races. These races existed in FreeBSD 4.x but are more The cam_sim structure was being deallocated unconditionally by device Do some M_WAITOK<->M_INTWAIT cleanups. Code entered from userland, such as The cam_sim structure was being deallocated unconditionally by device When deregistering a bus, pending device bus scan timeouts are not deleted When detaching UMASS, abort all the pipes before detaching the sim. Note Do not free the old VFS vector when recomputing the vectors. If a module Code cleanup, remove 'register' and (void) casts. No functional changes. Move a drain output call to before a dirty block check instead of after Partial merge from FBsd-5, code to make some PCCARDs work under NEWCARD. Add a Makefile stub to build pccard as a module. Add a Makefile stub to build ep as a module. ANSIfication, K&R cleanups, 'register' removal. The kernel realloc() does not support M_ZERO, assert the case. nextsoftcheck (which is a really aweful interrupt interlock hack) needs to Separate chroot() into kern_chroot(). Rename change_dir() to checkvp_chdir() main() more typically uses 'char **argv' instead of 'char *argv[]'. Remove An strlcpy() in the last commit was unconditionally overwriting 'name' Fix a bug in the recent connectionless commit. When sending a UDP packet Update the 825xx GigE support. Add a large number of new device id's and Create /usr/src/test/test ... a dummy directory that new committers can Merge FreeBSD ifconfig.c rev 1.94, strlcpy() cannot be used if the source Separate the pieces a bit. The usb.ko module does not build EHCI, and Add some missing SRCS dependancies to the EHCI module. Add some USB specific documentation to make life easier for people Add additional default-generation entries for usb2-5, most new PCs these Repository copy libkern/{Makefile,iconv*} -> libiconv, and modify the Adjust the Makefile's to move the iconv files to libiconv, and add it to Bring in MODULE_VERSION from FreeBSD-5. Even though our kernel doesn't Fix generation of USB_EVENT_DEVICE_DETACH, which was commented out in Incorporate NetBSD rev 1.111: Set the device address before reading the If XL cannot properly attach it tries to detach to clean up. Unfortunately, Make ALWAYS_MSG the default. This introduces an extra procedural call Get rid of the obsolete SMP checks in SMBFS. Add function call overhead tests for (1) direct calls, (2) indirect calls, Add some additional spaces so the ctl string does not bump the 64-byte-align the test functions so they appear on different cache lines. Fix p_pctcpu and p_estcpu. When the new systimer stuff was put in the Fix p_pctcpu and p_estcpu (addendum). Add a ESTCPUFREQ and set it to 10hz. Add a missing resetpriority() which was causing all newly forked processes No changes. force commit / update timestamp so ioctl.c is regenerated. correct a buildworld failure, fix the include file filter to allow the Undo part of the last commit. OBJFORMAT_PATH controls how the cross A large number of targets were doing a mkdir -p openssl. A parallel make grr. fix bug in last commit. Use .ALLSRC instead of .OODATE This represents a major update to the buildworld subsystem. buildworld subsystem update addendum. Hopefully fix buildkernel, Another attempt to fix make -j N issues with this subdirectory. Add hexdump and kill to the bootstrap list. Correct the tools path used Add 'route show' to the route command, plus style cleanups. Make the Destination and Gateway columns wider when printing FQDNs so Add -w, which prints the full width of the data being represented even if ANSIfication (procedure args) cleanup. make -j N support, add a required dependancy. Fix malloc semantics, M_NOWAIT->M_WAITOK. agp_nvidia.c was not being linked into the module build. Fix malloc semantics (M_NOWAIT->M_INTWAIT/M_WAITOK). Return a low priority for the "hostb%d" catch-all for pci busses, which will make -j N support, the generated lib_gen.c and nomacros.h files depend Fix a missing makewhatis related change so buildworld works again. The NXCC (native C compiler) misnamed OBJFORMATPATH, it neesd to be Make the .nx/.no native program helper binaries work and add some missing Protect v_usecount with a critical section for now (we depend on the BGL), Do some major performance tuning of the userland scheduler. Import Alan Cox's /usr/src/sys/kern/sys_pipe.c 1.171. This rips out Add the pipe2 sysperf test. This test issues block writes from parent to Initialize the pcpu clocks after we've activated the cpu bit in Generally bring in additional sf_buf improvements from FreeBSD-5. Separate Bring in a bunch of well tested MPIPE changes. Preallocate a minimum UDF was not properly cleaning up getblk'd buffers in the face of error Count vnodes held on the mount list simply by using the Protect the mntvnode scan for coda with the proper token. Since we do not Second major scheduler patch. This corrects interactive issues that were Initial XIO implementation. XIOs represent data through a list of VM pages Hook XIO up to the kernel build. Change CAPS over to use XIO instead of the vmspace_copy() junk it was using Trash the vmspace_copy() hacks that CAPS was previously using. No other Cleanup libcaps to support recent LWKT changes. Add TDF_SYSTHREAD back Allow the child priority (receive side of the pipe test) to be specified Cleanup the forking behavior of the CAPS client test program. Add missing sf_buf_free()'s. Get rid of the upper-end malloc() limit for the pipe throughput test. Implement a convenient gd_cpumask so we don't have to do 1 << gd->gd_cpuid Fix an unused variable warning (non-operational). Enhance the pmap_kenter*() API and friends, separating out entries which Make buildkernel's require a buildworld to be done first, because they Correct the commented-out example for MODULES_OVERRIDE. In the sysclock commit I tried to make 'boottime' a fixed value, but it Fix bugs in xio_copy_*(). We were not using the masked offset when Create a normal stack frame in generic_bcopy() to aid debugging, so Cleanup NXENV so it works properly when running buildworld from FreeBSD. Perl is no longer needed by buildworld/buildkernel. Setting the date/time does not always properly write-back the RTC, causing Fix a missing wildcard binding in the recent wildcard binding hash table work. Quake 3 server (running under linux emulation) was failing with odd ' Undo the last commit. Utility programs which install c++ includes have no Fix buildworld. Document TOOLS_PREFIX and USRDATA_PREFIX, improve INCLUDEDIR Bring in FreeBSD 1.2.2.2. Properly unwind the stack when certain Partial sync from FreeBSD adds some more support and fixes. Also replace a Remove makewhatis from /usr/bin (it officially resides in /usr/sbin), per-cpu tcbinfo[]s aren't ready for prime time yet. The tcbinfo is assigned Export the lwkt_default_*() message port default functions so other Subsystems which install an so_upcall may themselves call socket functions Do not reset %gs in signal handlers, some programs depend on it (KDE in Protect nfs socket locks with a critical section. Recheck rep->r_mrep just Use hex bit values instead of decimal bit values (non operational change). General netif malloc() flags cleanup. Use M_INTWAIT or M_WAITOK instead General bus malloc() flags cleanup, M_NOWAIT -> M_INTWAIT. Note: leave General ata malloc() flags cleanup. Use M_INTWAIT where appropriate and Make TCP stats per-cpu. Enable propolice (stack smashing detector) by default on gcc3. Make TCP stats per-cpu. (forgot to add new header file) TCP statistics structure renamed tcpstat -> tcp_stats. /tmp/motd* files were being left sitting around after a reboot when the namecache work stage 4: namecache work stage 4a: Do some minor performance cleanups with negative Introduce negative (ENOENT) caching for NFS. Before this, an attempt to ANSIfication and style cleanups. Non operational. Do some minor critical path performance improvements in the scheduler Add vfork/exec perf test. exec1 tests static binaries, exec2 tests dynamic Use the sf_buf facility rather then kmem_alloc_wait/pmap_kenter/kmem_free Fix the conditional used to determine whether psignal() should be called. Remove the now obsolete /usr/include/g++. Cleanup after the nawk->awk ANSIfication/style cleanups (non operational) The malloc() call in at_fork() needs to use M_WAITOK instead of M_NOWAIT. get rid of TCP_DISTRIBUTED_TCBINFO, it only added confusion. res_search only incremented got_servfail for h_errno == TRY_AGAIN *AND* uio_td might be NULL, do not indirect through uio_td to get to td_proc Run the exec test for 5 seconds instead of 1 to improve measurement Implement lwkt_abortmsg() support. This function chases down a message and Detect when the target process's thread is sitting on a message port and Change WAIT_FOR_AUTO_NEG_DEFAULT to 0. Do not wait for auto-negotiation to netisr_queue() needs to reliably allocate the message used to reference the Use vm_page_hold() instead of vm_page_wire() for exec's mapping of the first Revamp the initial lwkt_abortmsg() support to normalize the abstraction. Now When an mpipe was being destroyed, each element in the array was being Followup commit, redo the way the root file descriptor slop is calculated Fix a netmsg memory leak in the ARP code. Adjust all ms_cmd function Fix a race in user_ldt_free() against an interrupt (which attempts to M_NOWAIT -> M_WAITOK or M_INTWAIT conversions. There is a whole lot of net Use M_INTWAIT instead of M_NOWAIT in the ip messaging redispatch case to The temporary message allocated to execute a connect request is not M_NOWAIT to mostly M_INTWAIT conversions, with a splattering of Correct type-o in last commit. oops. More M_NOWAIT -> M_INTWAIT | M_NULLOK conversions, primarily atm and ipsec. posixlocks resource limit part 1/4: Add support to the login.conf database, posixlocks resource limit part 2/4: Add support to /usr/bin/limits. Revamp UPDATING with separate instructions for upgrading from sources buildiso was assuming a native obj hierarchy when running the make distribute msync(..., MS_INVALIDATE) will incorrectly remove dirty pages without Fix a client tail -f vs server-appended file data corruption case by #ifdef out the PCATCH/CURSIG code for userland (libcaps), it only applies Support for more video modes: accept mode names like MODE_<NUMBER> where If the server goes away while the client is trying to copy a message from Fix a bug noted by David Rhodus and removes minor redundancy. nextsoftcheck must be a volatile pointer, not a pointer to a volatile. Bring in the following revs from FreeBS-4: Revamp the PIPE test a bit. Use a calibration loop to make the test always Due to pipe buffer chunking the reader side of the pipe was not traversing Add mem1 and mem2 .... memory copying and zeroing test suites, making it Rewrite the optimized memcpy/bcopy/bzero support subsystem. Rip out the Make hash tables one power of 2 larger so they don't (generally) fold Make SF_BUF_HASH() into an inline routine, sf_buf_hash(), and add an Remove the (now non existant) i486_bzero assignment for I486_CPU. Correct a bug in the last FPU optimized bcopy commit. The user FPU state Fix a race in npxdna(). If an interrupt occurs after we have set npxthread Fix a race in the FP copy code. If we setup our temporary FP save area Fix another bug in the recent bcopy revamp. The range checking was Clear npxthread before setting CR0_TS. Add bcopyb() back in for the PCVT driver. bcopyb() is explicitly Commit an update to the pipe code that implements various pipe algorithms. We must pmap_qremove() pages that we previously pmap_qenter()'d before (non bug) The hash routines are never called with a size of 1, but make ipstat needs thread2.h for MP stuff and mycpu. A memory ordering barrier is needed in crit_exit() ensure that td_pri ip6_input() must call the IP6_EXTHDR_CHECK() macro with a specified return Fix an exit-race with ^T. If a process is exiting it may be detached from Another major mmx/xmm/FP commit. This is a combination of several patches pmap_qremove() takes a page count, not a byte count. This should fix Do not trust the third-party ACPI code. Track memory mapping requests Use M_INTWAIT instead of M_NOWAIT for the rest of the acpica support code. Document the fact that SYSTIMERS operate without the MP lock. Followup, fix some missing ODFM->OFDM conversions. One of the lf_create_range() calls in lf_clearlock() was passing a bogus Add the filesystem/NFS stress tester program, adapted for BSD by Jordan Move the fsx filesystem tester program from /usr/src/tools/regression to Fix IPV6 listen(). It was simply a matter of a missing Peter Edwards brought up an interesting NFS bug which we both originally lf_setlock() was not returning the correct error code due to an 'int error' lf_alloc_range() must initialize the returned structure sufficiently such that Add an assertion to sys_pipe to cover a possible overrun case and reorder Followup log-only addendum: It turns out that last commit did not solve Fix a bug in sys/pipe.c. xio_init_ubuf() might not be able to load up the Attempting to access a device which has been destroyed, such as a UMASS sf_buf_free() requires a critical section to safely manipulate the free Remove DIAGNOSTIC elements that are no longer correct. Close an interrupt race between vm_page_lookup() and (typically) a malloc() flags cleanup and fixes. Get rid of M_NOWAIT in places where the sf_buf_ref() needs a critical section. Note that the function is not device switch 1/many: Remove d_autoq, add d_clone (where d_autoq was). INCSLINKS cannot be used to make softlinks within machine/ because Fix another race in ^T. ttyprintf() can block, during which time the Fix more ^T panics. calcru() and p_comm also need p_thread checks. Just Bring in a better seeding and random number generation algorithm from Device layer rollup commit. Using the new contrib rev mechanism, bring cvs up to 1.12.8. Add patch files Make the primary PQ_ macros available to modules by creating the pageq Cleanup compile issues with the recent dev_t changes. Cleanup compile issues with the recent dev_t changes. Cleanup pass. No operational changes. Get rid of obsolete PQ_ options. Cleanup pass. No operational changes. Get rid of VM_WAIT and VM_WAITPFAULT crud, replace with calls to Pass the proper mask/match arguments to cdevsw_add() so the various sound The extended mbr handling code was improperly using the b_dev from a ANSIfication and cleanup. No operational changes. Increment v_opencount before potentially blocking in dopen to avoid Cleanup cdevsw and dev reference counting ops to fix a warning that occurs ANSIfication and general cleanup. No operational changes. count_udev() was being called with the wrong argument. preload_delete_name() needs to use the same path munging that the other Cleanup warnings. No operational changes. Cleanup warnings. No operational changes. Note that the original ACPICA Correct a bug in the last commit, udev must be assigned from vp->v_udev, not ANSIfication and cleanup. No functional changes. Mask bits properly for pte_prot() in case it is called with additional Bring in the fictitious page wiring bug fixes from FreeBSD-5. Make additional Attach bind-9.2.4rc4 to the base system. Rip out bind-8 binaries and add Initialize the FP unit earlier in the AP boot sequence. This solves Add lwkt_setcpu_self(), a function which migrates the current thread to Make sysctl_kern_proc() iterate through available cpus to retrieve the Two unused arguments were recently removed from pmap_init() without In the root fs search use the correct unit number when checking for When doing a restart, sleep for 0.1 seconds after the kill to avoid racing Fix the path to named's pid file. Since RNDC does not yet support 'restart', use RCNG to restart named Add RLIMIT_POSIXLOCKS support to csh/tcsh. Add a pfil_has_hooks() inline to shortcut calls to pfil_run_hooks(), ANSIfication. No operational changes. VM86 calls on some BIOSs, apparently mainly VESA calls, use 8254 timers that ANSIfication and cleanup. No functional changes. Clean up some misuses of bp->b_dev after a strategy function has completed ms_cmd has changed to a union, update the test code. ANSIfication. No functional changes. Use MPIPE instead of (the really aweful improper use of) mbuf_alloc() for TEST Remove dtom() in unused (#ifndef notdef'd) code. Well, just remove the whole Use MPIPE instead of the really hackish use of m_get() and mtod()/dtom() Use MPIPE instead of misusing m_getclr() (i.e. m_get) for NETNS's Add MPF_INT (for mpipe_init()), which allows one to specify that an MPIPE Use a normal malloc() for PCB allocations instead of the really aweful mbuf Remove dtom(). dtom() is no longer supported (precursor requirement for async syscall work: The async syscall code got dated by recent LWKT Allow dup_sockaddr() to block, otherwise the code becomes non-deterministic. Rearrange the kern_getcwd() procedure to return the base of the string M_NOWAIT is just wrong in the init code. The allocation must succeed. Add the MSFBUF API. MSFBUFs are like SFBUFs but they manage ephermal Deal with revoke()'d descriptors. The underlying vnode is vgone()'d, which The usbcom's device was being released too early in the close sequence. Remove the canwait argument to dup_sockaddr(). Callers of dup_sockaddr() Add the prototype for the recently added XIO API call xio_init(). Back out the last change. Normal 'make' builds in the source tree are Remove dtom() from comment. Add in_pcbinfo_init() to encapsulate basic structural setup (right now just Recent accept() changes started depending on the protosw->pr_mport field Additional listhead->pcblisthead and marker support for netinet6. Convert netproto/ns to the pr_usrreqs structure. This is untested work ulimit.h needed to be added to the include file list for installation. Fix improper vm_object ref counting in procfs that was introduced in the last Both 'ps' and the loadav calculations got broken by thread sleeps, which Fix a bug in the reply port path related to aborts. Aborted messages are ANSIfy the remaining K&R functions. Remove two bzero-after-malloc's in start_forked_proc() should not be called when rfork() is called without Fix a race with the clearing of p->p_session->s_ttyvp. NULL the pointer Clean up GCC3.3, rip out all the weird search paths it adds and fix a long Make sure gcc_local.c and any left over patch mess is cleaned up by make clean, Don't set prefetch mode on VIA chips, it causes problems on newer chips and Write a remote configuration utility called 'rconfig'. This initial chdir into WorkDir before running the downloaded script. A sample rconfig script which completely wipes and reinstalls dragonfly Fix an aggregious non-terminated buffer issue and also fix the retry code Merge from vendor branch GCC: Bring in a trimmed down gcc-3.4-20040618. Allow CCVER to be empty, which will cause the compiled-in default compiler 'lrint' is a gcc3.4 builtin and gcc3.4 is unhappy if we use that for anything Fix a prototype error in gcc-3.3's com.h in order to allow gcc-3.4 to compile Remove the last vestiges of -DCCVER from 2.95.x contrib so we can remove Add a missing break after default: for gcc-3.4 support. Hook gcc 3.4 into the buildworld. Rearrange HOST_CCVER so it becomes the gcc-3.4 cleanup. Adding missing break after default: (gcc-3.4). gcc-3.4 cleanups. Add missing break statements, deal with goto labels, gcc-3.4 cleanups. missing break / goto labels. Correct a known panic in the twe driver, M_NOWAIT -> M_INTWAIT work. General M_NOWAIT -> M_INTWAIT work, except in periodic timeout() routines Some more M_NOWAIT->M_INTWAIT. Not sure if the sym changes are correct, Fix -j builds for gcc-3.4. The .nx build in cc_tools was breaking Fix -j builds for gcc-3.4. The .nx build in cc_tools was breaking Turn propolice (-fstack-protector) on by default. Propolice rearranges Unbreak the buildworld by fixing a cc_tools dependancy on cc_prep in Check for a queued interrupt being dispatched after the ATA driver Fix an improper DELAY in the ata tag code (but nobody should be using Do a partial synch from FreeBSD-5 of the NVIDIA and NFORCE ATA setup code. Hack in the code from FreeBSD-5 to set the timings for NVIDIA/AMD chipsets. Fix a broken vrele() in the session tty exit code. The route table treats sockaddr data as opaque, which means that the unused Backout 1.19. It prevents some machines from booting (READ timeout). Get rid of the PFIL_HOOKS option, integrate pfil in the system permanently. Bring in 1.33 from FreeBSD-5: Convert the M4'd btxldr.s to a preprocessed btxldr.S (taken from FreeBSD-5) Implement duel-console mode and make it the default. Adjust the boot Remove a vn == vp case that was breaking out of the namecache lookup loop Stage 1/999: Rewrite boot0 to relocate to higher memory (beyond 64K). This Add a signature for bootn0 so boot0cfg doesn't complain about it, even If Preposterous BIOS basemem is detected, instead of truncating to 640K Cleanup a comment. When a kernel-created thread exits, properly remove it from gd_tdallq and Some general cleanups and use M_ZERO instead of bzero() in one case Add some early serial console debugging support to the loader. This is all Unhook bios_howmem (a simple reporting variable), it isn't ready yet. Add a short unconditional sleep in the lockf retry path to try to Reorganize the subdirectories into an include/ subdirectory so the ndp was using the old contrib/tcpdump instead of contrib/tcpdump-3.8.3 Enhance lockf's debugging with macros. Print the originating process in Move include path work for named related utilities. Remove 28.cvs for now (the files involved do not exist in the latest contrib) Update contrib/cvs to contrib/cvs-1.12.8, but note that this Makefile Bring boot2 and the loader into line with our new dual-console support. Do not generate the 'Block size restricts cylinder groups to BLAH' warning Do not try to chflags() a symbolic link when copying an underlying filesytem Do a bit of cleanup and enable the SIO FIFO (1655x) to reduce latencies Fix a minor bug in the auto-console selection (handle the -m mute option Be ultra conservative for now, do not try to initialize the FIFO. More missed named fixups related to the include directory move. Bring in YONETANI Tomokazu's acpi-update-2.patch (27-May-2004), a major Add note to cpu_idle_hook (which is currently asserted so the code doesn't Show a more complete listing of interrupt sources (do not weed out sources Make the VR device backdown to emergency polling if the interrupt appears Implement livelock detection for threaded interrupts and automatically The schednetisr() routine is supposed to be MP and interrupt safe, but wasn't Properly probe for the serial port. If the serial port is unmapped or Addendum: it should be noted that boot2 also probes for a valid serial Turn the getty on on ttyd0 by default so a CDBOOT will run all the way through The acpica-unix-20040527 download from intel seems to like to use upper Fix three bugs in the livelock code. Fix a minor range error in an Increase PCCARD_CIS_SIZE from 1024 to 4096 as per FreeBSD-5. Add range Update the README file with useful information about ACPI. Undo one of the recent optimizations I made (only running the handlers Change the version string to RC1 Fix a snafu in the last commit. In the normal non-polling case interrupts Add an explanation for the use of CCVER in /etc/make.conf, and recommend Make sure a serial port exists by determining whether it is possible to drain Add AGP support for the i852GM, i855GM, and i865G. When booting from CD, check cd1a and acd1a after cd0a and acd0a, allowing Get rid of some debugging printf's that had been accidently committed. minor cleanups / no functional changes. Bring in acpica-20040527 from intel. See: acpica5 update part 1/3: Implement support for acpica-unix-20040527. acpica5 update part 2/3: Fix a bug introduced in the original acpica5 acpica5 update part 3/3: Bring the usr.sbin/acpi tools into the base system Cleanup conditionals on exit, remove unnecessary free()'s prior to error exit. Fix a bug in the -i code where an existing interface might not be located. More optarg and declaration cleanups. Update newvers.sh to RC2 Don't allow a step size of 0 (leading to an infinite loop). Have Be more verbose when printing information on transfer phase errors. Unconditionally reset ATAPI-CD devices during boot. Brian's notes on this: In ip_mport() (IP packet demux code), check the minimum length requirement Bring in a bunch of updates from NetBSD: Bring EHCI up-to-date with NetBSD. The most serious fixes are 1.53, 1.55, Julian Elischer posted an interesting proof-of-concept to freebsd-current ugenbuf is associated with the 'ugen' device, not the 'ugenbuf' device. Give the newly created ugenbuf.c the standard DragonFly copyright. Give ugenbuf the standard DragonFly copyright. IPS was using malloc flags of 0 (which is no longer allowed). The helper Add a missing '$' to the FreeBSD cvs tags. Fix some issues with the pccard shutdown path (during reboot and halt). There was a mountlist race in getnewvnode() whereby the system could block (installer support). We are going to have a special 'installer' login Import the new nrelease Makefile packaging and root template infrastructure The temporary 'installer' user is not supposed to have a password. The release password databases must be regenerated after installing the The password file the installer writes to the HD should not have an Update the version to 1.0-RELEASE. Update the README to include a description of the installer and add some Give the dfui packages a version number (1.0). Implement serial console Resynch the pristine ttys for the installer with the base system ttys (put Adjust the copyright to the new official DragonFly copyright. Update dfuibe_installer to 1.0.1 to fix a series slice corruption issue. Update release to 1.0A Note that Jeff indicated to me that Jonathan Lemon gave his permission to Update all my personal copyrights to the Dragonfly Standard Copyright. Fix a URL displayed in an advisory. Minor documentation update to clarify the effect of the vfs.usermount sysctl. Compensate sockstat for the CPU column that netstat now adds. Don't let packets with DF set sneak by through the hardware-assisted Fix two serious bugs in the IP demux code. First, if ip_mport() m_pullup()'s Bring the twiddle fix in from FreeBSD5:1.11. This rev also added a Brute force a register save/restore for int 0x13 (disk I/O) and 0x10 (putchar) Sync bootn0 with recent boot0 fixes. Consolidate most constant memory addresses in bootasm.h part1/2. Convert Consolidate most constant memory addresses in bootasm.h part2/2: Correct a bug in NXCCFLAGS generation. MAJOR BOOT CODE REVAMP / 30 hour+ hacking session (50 if you include the SCSI CD devices require 'cd0c' to be specified instead of 'cd0a', while Unconditionally print startup 8254 and TSC calibrations. ata-raid associates raw ata disk devices to record the raid setup and checks Add a dire warning about PCI_ENABLE_IO_MODES. Fix a device pager leak for the case where the page already exists in the Replace the perl man filter with a sed man filter, fixing manual page Update the userland scheduler. Fix scheduler interactions which were Adjust gd_vme_avail after ensuring that sufficient entries exist rather Have DDBs 'ps' command display additional scheduler-related paramters Make fstat() account for pending direct-write data when run on a pipe. Move usched_debug out of the INVARIANTS conditional. Make it unconditional. Add 'propolice' to the version string version_local.c sed patchup. Boot1 tries to clear boot2's BSS. It makes several assumptions that are Patch out tcsh's use of 'exp2', which is math-reserved in gcc-3.4. rename exp() to expx() to avoid conflict with gcc-3.4 built-in. rename functions that clash with reserved math procedures to avoid gcc3.4 udev2dev() can return NODEV now, make sure it doesn't crash autoconf's Change the default syslogd flags from -s to -ss, which prevents a network (From Alan): Output a more descriptive error message when AGP can't bind memory. Stage 1/many: mbuf/cluster accounting rewrite and mbuf allocator rewrite. Implement a kernel strdup() function (API synch with FreeBSD). Add a stack-size argument to the LWKT threading code so threads can be Add LWKT convenience functions lwkt_getpri() and lwkt_getpri_self(). Move kthread_create() from lwkt_thread.c to kern_kthread.c. Add a new add the 'y' and 'Y' options to ps, and add the 'iac' keyword. The 'y' Remove a recently added incorrect assertion. I was assuming that Add a test-and-set and release poll function. This is really just a hack Work to allow pure threads to issue VFS lookups: Only check p->p_ucred Work to allow pure threads to issue VFS lookups: fp_open() uses proc0's Work to allow pure threads to issue VFS lookups: (untested/experimental) Sync the IFM_MAKEMODE() macro from FreeBSD-5. Bring in NDIS emulation support from FreeBSD-5. NDIS is a Windows device Generally speaking modules should unconditionally enable things like NDIS_INFO -> NDIS_LOCK_INFO Fix more __stdcall issues. Move the __stdcall into the function typedefs Provide some basic instructions on how to create an NDIS wireless driver Add the ndiscvt utility from FreeBSD-5, which is used to compile windows Bring in the latest pkg_install sources from FreeBSD-5. Change sendfile() to use the new m_ext callback scheme for cleaning up after Get rid of mb_map. Retool the mbuf and mbuf cluster allocator to use Since mbufs are no longer limited by an mb_map the kern.ipc.nmbufs and Fix buggaboos that prevented ACPI_DEBUG from working. taskqueue_create() should use M_INTWAIT rather then M_NOWAIT. Add a global, clocks_running, which tells us when timeout/ticks based clocks Rip out the badly designed softint-based taskqueue used by ACPI for callbacks. Make doubly sure that timer2 is not used for speaker operation. tcp_input()'s DELAY_ACK() code checks to see if the delayed ack timer is The TCP stack is notified every time userland reads from the TCP socket The obj hierarchy must be built before the ssh-etc target can be run Merge from vendor branch CVS: Bring cvs-1.12.9 into the CVS repository Upgrade our CVS build from 1.12.8 to 1.12.9 to fix a number of pserver make the __asm for the pushfl fakery __volatile. GCC3.4's (default) unit-at-a-time optimization is incompatible with -mrtd. eventhandler_register() M_NOWAIT->M_INTWAIT. Add an event handler to adjust the cpu throttle state automatically when Remove the unconditional timer_restore in the bios call path, it is Do not hack a #define __FreeBSD_version if __FreeBSD__ does not exist, Improve compatibility with older FreeBSD-4.x systems when cross-building Add bzip2 to the bootstrap tools list. The compat libs (if enabled in ffs_dirpref() calculates dirsize = (fs->fs_avgfilesize * fs->fs_avgfpdir). The SF64-PCR card has no sound support but stupidly uses the same PCI id Merge FreeBSD ip.c/1.101, commit message: Synchronize syslogd with FreeBSD. Primarily syslogd.c/1.129. This primarily Bring in FreeBSD mount.c/1.58, original commit message: PPP stupidly hardwires some flag constants that it 'steals' from the mbuf Discard accepted and pending connections after we detach the listen socket Add a state to sanity check tcp_close() to make sure it is not called Get rid of the NO_TCSH make.conf variable. We do not support removing add 'read1', a program that tests reading one byte at a time from a file. The base/count bounds checking was insufficient, leading to a kernel memory Close a kernel mem disclosure bug in linprocfs. The uio_offset was not Oops, undo portions of the last commit, some extra work got committed that Have make upgrade remove two stale 80211 header files that can mess VFS messaging/interfacing work stage 1/99. This stage replaces the old Add a missing uio_td assignment (that unionfs needs). Add an installer-fetchpkgs target and other related stuff to reduce the VFS messaging/interfacing work stage 2/99. This stage retools the vnode ops Properly record and print 64 bit file sizes, do not truncate the file size Add some robustness to the error-requeue code. FreeBSD-5's (new) ata driver Test commits after machine upgrade. more testing more testing 3 sigh feh feh2 *** empty log message *** test yet more testing more testing *** empty log message *** *** empty log message *** bleh *** empty log message *** *** empty log message *** test [test format strings for cvs server config] Properly free the temporary sf_buf in uiomove_fromphys() if a copyin/copyout Make the buildkernel and nativekernel targets completely wipe and regenerate The -D__FreeBSD__ must be -D__FreeBSD__=4 or sendmail will not be properly Add the vop_ops for ntfs before obtaining the root vnode(s) rather then Remove the advertising clause where possible as per the directive from Include language describing the prefered method for recognizing authors as Add the standard DragonFly copyright with attribution to the author (which is Add missing extension return value for __byte_swap32_var() in the case The VFS work has made vnode_if.awk obsolete. Fix a SFBUF memory leak in sendfile(). We were not properly tracking M_EXT_CLUSTER was not being properly inherited in m_copym(), m_copypacket(), doingdirectory is really a boolean, use an int rather then ino_t and cast Minor cleanups, no operational changes other then to add an error message if Minor cleanups. Minor cleanups. Minor cleanups. Also, change various exit(10) codes to exit(1). Document the unorthordox use of getopt(). Check for a mkstemps() failure, generate a proper warning if the fopen() Minor cleanups. Document -j in usage. Cleanup various type-o's in comments. Output an error message if the open fails. Correct a mistake in the last commit that caused usage() to seg-fault, Get rid of dfly/fbsd4/fbsd5 checks for the ntohl() return type. We are VFS messaging/interfacing work stage 3/99: Bring in the journaling Turn off the getty on ttyd0 by default to avoid certain machines from Merge FreeBSD/1.212 and FreeBSD/1.213. These only appear to have an Bring in FreeBSD/1.214 - UC Regent's advertising clause removal per Bring in FreeBSD/1.218. TCPS_CLOSED is no longer 0 in DragonFly. Because ipfilter was assuming Sync VESA support with FreeBSD-CURRENT, adding support for cards that Since ip_input() truncates the packet to ip->ip_len prior to entering the VFS messaging/interfacing work stage 4/99. This stage goes a long ways Fix handling of the recycling of vnodes from a failed hash collision. Fix a bug that was causing a 'lockmgr: draining against myself' panic when Minor style cleanups. Start a list of obsolete functions that should no Add the -8 option to finger for 8-bit output (from NetBSD). pkill does not compile cleanly, remove it from the build. Remove some unnecessary extern's of libc functions. With the 'cu' program removed from the system (long ago), enable 'cu' Reenable pkill. There wasn't actually a problem with it, it turned out to Add VESA mode support for syscons. The vesa.ko module must be loaded and Fix a bug in sillyrename handling in nfs_inactive(). The code was improperly Correct a bug introduced in a recent commit. This fixes touch -t. Misc syntax cleanup. Minor non-functional syntax changes. Remove (void) casts, and remove Replace all occurences of strcpy by the safe strlcpy where needed. Avoid WARNS=2 error by renaming the 'print' variable to 'printerr' to avoid Make timed WARNS 2 compatible. Remove #ifdef sgi and related conditionals. Remove improper libc prototypes. Remove function casts to (void). Set OPENSSH_USE_POSIX_THREADS to 1 to work around a privilege separation Add a keyboard preference to kbd_init_struct(). When a wildcard keyboard is Use the new keyboard preference feature. Leave the USB keyboard registered with the keyboard subsystem even if the NULL out p_stats when detaching the underlying thread stack, since the The mount options matching code was incorrectly testing for string Temporary hack to remove historical schg protection on 'cu' to avoid With vnode locking now mandatory a number of bugs have cropped up in the Implement a convenient lwkt_initport_null_rport() call which initializes Fix a badly written preprocessor macro. Make sio.S aware of when -mrtd is or is not being used, so it can implement scalb() takes two doubles, not (double, int) (though there appear to be Clean up struct session hold/rele management. The tty half-closed support ANSIfication/cleanup, no functional changes. Give the MP fields in the thread structure useful names for UP builds so Add the 'M' status flag to indicate those proceses or threads which timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* When a umass storage device is unplugged there might be pending requests Improve error reporting when the cdevsw code detects problems. Fix cdevsw_remove() warnings related to the removal of mass media (e.g. Don't complain when a cdevsw with non-zero refs is being removed if it still 'vidcontrol show' was broken by the better vesa support commit. Fix it. timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* Remove a redundant bzero (which also specified the wrong length in anycase timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* [ Adding missing callout_init()'s ]. Move all the softclock/callout initialization out of MD and into MI. Get rid Remove a superfluous (and incorrect due to the recent callout changes) bzero() Unbreak the SCSI drivers. Move the callout_init() from xpt_get_ccb() to Oops CALLOUT_DID_INIT had the same flags value as CALLOUT_MPSAFE, causing *** empty log message *** missing log message for last commit: Rearrange the mbuf clearing code in Fix a race on SMP systems. Since we might block while obtaining the MP Make the freeing free mbuf assertion a bit more verbose. Fix a number of races. First, retain PG_BUSY through a vm_page_remove(), Add a missing free to a failure case (non critical). Fix a type-o / syntax error in the last commit. Add missing callout_init(). Joerg pointed out that callout_init is called by the SYSINIT. Reorganize Fix the ncpu check for 'ps' so it does not display the 'M' flag on UP timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* [structural fields appeared not to be used, timeout/untimeout ==> callout_* malloc() M_NOWAIT -> M_WAITOK, plus remove bzero's in favor of M_ZERO. timeout/untimeout ==> callout_* Add DragonFly-stable-supfile and cleanup the comments in it and timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* timeout/untimeout ==> callout_* cleanup syntax a bit. non functional changes._*. NDIS hides the callout in a windoz Remove timeout() and untimeout() plus all related support. Not having to testing Quick cleanup in preparation for a more complete cleanup. Add a missing #include and add a missing semicolon. Do not unconditionally fork() after accept(). accept() can return -1 due if_pfsync.h needs pfvar.h for struct pf_addr. The ioctl header collection Use the appropriate #defines instead of hard-coding constants. Here are (BSD licensed) replacements for bc and dc: M_NOWAIT -> M_INTWAIT/M_WAITOK. Plus fix a NULL free() that can occur if M_NOWAIT -> M_WAITOK Hook the new bc and dc into the tree. Unhook the old gnu bc/dc and add Fixup the location of openssl / fix release builds. Remove unused variable. Add the -L and -l options to install. -L allows an alternative /etc directory Add support for the LinkSys EG1032 GigE PCI card, and add support for The inode size must be unsigned-extended to a quad, not sign-extended, so Fix a bug where DragonFly's nat was closing TCP connections every 10 minutes. Only assume a numeric group id if the ENTIRE string is numeric. VFS messaging/interfacing work stage 5/99. Start work on the new Document the additional vendors applicable to the 'sk' driver. Get rid of some conditionalized code which the pmap invalidation API took The wrong vendor-id was used when adding Adaptec ServeRAID Adapter support. VFS messaging/interfacing work stage 5/99. Start work on the new Clarify an element in the BUGS section related to the sticky bit. VFS messaging/interfacing work stage 5b/99. More cleanups, remove the Minor cleanups to TIMER_USE_1 (no real fixes or anything). VFS messaging/interfacing work stage 6/99. Populate and maintain the Attempt to make the boot code operate in a more deterministic fashion. VFS messaging/interfacing work stage 7/99. BEGIN DESTABILIZATION! VFS messaging/interfacing work stage 7a/99: Firm up stage 7 a bit by Remove ATTY. Unsetting an undefined variable or function is not an error. Enforce directory creation ordering for subdirectories to fix a make -j VFS messaging/interfacing work stage 7b/99: More firming up of stage 7. Make patch's -b option match the old patch's -b option, at least for now, Add a debugging utility which dumps that kernel's namecache topology. Only include opt_vmpage.h if _KERNEL is defined. VFS messaging/interfacing work stage 7c/99: More firming up of stage 7. VFS messaging/interfacing work stage 7d/99: More firming up of stage 7. VFS messaging/interfacing work stage 7e/99: More firming up of stage 7. Update ncptrace.c to handle DragonFly_Stable and HEAD. Add the -I option to rm. This option asks for confirmation just once if Do not run getty on ttyd0 by default. The installer's version of ttys already alias rm to rm -I by default in interactive shells. rm -I is a safer, more Last commit inspired by: Giorgos Keramidas <keramida@freebsd.org>, who If we get an EOF in check2(), terminate the yes/no question loop rather VFS messaging/interfacing work stage 7e/99: More firming up of stage 7. Add vnodeinfo - a program which scans each mount's vnode list and dumps VFS messaging/interfacing work stage 8a/99: Sync other filesystems to stage 7 Fix a bug in the tty clist code. The clist code was only protecting itself The last commit was improperly documented as being VFS related. It was VFS messaging/interfacing work stage 7f/99: More firming up of stage 7. VFS messaging/interfacing work stage 7g/99: More firming up of stage 7. Fix the -N and M options for ncptrace. VFS messaging/interfacing work stage 7h/99: More firming up of stage 7. Conditionalize the flag->string conversions so pstat compiles for both the Add conditionals so vnodeinfo compiles for both old and new api kernels. Add the standard DragonFly copyright. Bring in a fix from NetBSD for hid_report_size(). This fixes a detection Fix a USB stuttering key issue. Add a missing agp_generic_detach() call if a bad initial aperture size Make a DFly buildworld work on a FreeBSD-4.x system again by fixing VFS messaging/interfacing work stage 8/99: Major reworking of the vnode Try to close an occassional VM page related panic that is believed to occur test commits@ list 1/2 test commit@ list 2/2 Make a chdir failure fatal rather then just a warning, otherwise pax could Forgot to add for last commit: Avoid redefined symbol warning when libcaps uses thread.h with its own Do not use non-blocking malloc()'s in the busdma support code. A lot of Initialize the 'kernel' environment variable from loader.4th When pxebooted loader is detected not only change the kernel name, but Add /boot/defaults/loader-bootp.conf, a separate default configuration Final cleanup. After giving up on trying to avoid having two loader*.conf Remove the 'ether' module dependancy, it is not defined anywhere and will Add the "nv" interface, and enable it in the bootp case since the netif/nv The forth code is a real mess, things are repeated all over the place. Augment vnodeinfo to retrieve and display the number of resident pages in *** empty log message *** Add devices da4-da15 and ad4-ad7 to MAKEDEVs 'all' for convenience. Plug-in Create a softlink from /kernel to /kernel.BOOTP on the CD. Mount Implement hotchar support for ucom. Fix a bug where sc_ctlp() is improperly called when the packet is passed up Adjust rm's usage and manual page synopsis to include the new -I option. oops, forgot a mention. Last commit: MFC FreeBSD if_em.c 1.48 and 1.49. Fix bugs in the limiting code for negative-hit namecache entries. The system Fix an assertion, vgone() now requires that the vnode be VX locked and refd. Old API compatibility: The directory vnode passed to VOP_LOOKUP() must be The last commit failed to adjust the alignment like it said it did. Fix a boot panic with the amd device. We inherited some busdma code from Change the default handling for kernels built with debugging info (DEBUG=-g). Set the file modes for those rc files which are unconditionally replaced by Fix a final bug in the vfs cache cleaning code. An incorrect assertion was Make an existing vnode lock assertion a bit more verbose. The old lookup() API is extremely complex. Even though it will be ripped out This is a really nice rewrite of rc.firewall that cleans it up and adds Fix bugs in the last commit. Some islink checks and a fd->fdval change was Generate the BRANCH field from the current tag, use CURRENT if the current Synchronize a few libi386 issues from FreeBSD. Fix a bounce buffer bug, Add a default for X_WINDOW_SYSTEM in the dfports override case. Synchronize bug fixes from FreeBSD/RELENG_4. Fix a compiler warning by pre-declaring struct vnode; Remove the vfs page replacement optimization and its ENABLE_VFS_IOOPT option. Document bus_dmamem_alloc() a bit more. Fix bugs in the vm_map_entry reservation and zalloc code. This code is a bit Fix a seg-fault if -l is used without a /dev prefix, e.g. cu -l cuaa0. Of Add a section to UPDATING describing the users and groups that might have Fix a NULL pointer dereference panic that occurs when the TCP protocol null_revoke() needs to return 0. Remove unused variable. unmount was not removing the negative hits associated with a mount point. Conditionalize _CTYPE_SW* for bootstrap purposes. The last fix wasn't good enough. This one causes the SWIDTH lines to be The PRId64 check was wrong, causing bootstrapping failures on Bring in Jeff Wheelhouse's CLOG / circular log file support for syslogd, Fix 'route add -host <target> -interface <interface_name>. This was Remove incorrect cache_purge() calls in *_rmdir() (OLD API). These could Note last commit: changes were made primarily to avoid global variable name Fix a possible remote DOS against pppd, described in detail at Save and restore the 'version' counter file when doing a full buildkernel Add humanize_number(3) and split the trimdomain(3) function out from Bring in some more PCI ID's from FreeBSD. Give makewhatis an absolute path to make upgrading etc from single-user Fix a Trident DMA limitation. Symptom as reported by Brock was that his Remove an assertion that used to double-check the cred passed into vn_open(). Correct a typo and a mdoc(7) style issue. The last commit created a memory leak because 'buf' is static. Fix that, Add tip(1)'s emulation of cu(1) to tip's manual page. Sync make.1 with the rest of the make source. Do not explicitly set PCIM_CMD_SERRESPEN or PCIM_CMD_PERRESPEN. This was Correct a softupdates bug, an ir_savebp buffer was not being properly vfs_object_create() was being called too early on devvp in the FFS mount Fix another minor bug-a-boo inherited from 4.x sources, the wrong indirdep Create a non-blocking version of BUF_REFCNT() called BUF_REFCNTNB() to be Add lockcountnb() - a non-blocking version of lockcount() to be used only vmpageinfo is a program which runs through the vm_page_array and the The min() and max() macros in sys/libkern.h are typed u_int and thus do not Fix a very serious bug in contigmalloc() which we inherited from FreeBSD-4.x. The 'start = 1' change wasn't needed, revert that one little bit back to VFS messaging/interfacing work stage 9/99: VFS 'NEW' API WORK. Correct minor tinderbox -DDEBUG error. Add a 'preupgrade' target which creates any missing users and groups Temporarily remove union and nullfs from the vfs build list, they are Default vfs.fastdev to 1 for wider testing, so the vnode bypass for device Forced commit, correct comment for last commit. 1.25 has nothing to do with Add a missing com_unlock() to the serial port drain test code. This code Remove various forms of NULL, and cleanup types. This is a partial sync from The strings package passes back a pointer via brk_string() to internal efree()->free(). remove #define efree (which was defined to free). Do some deregisterization. Partial sync from FreeBSD/jmallet: Diff reduction for great justice against NetBSD, cast to unsigned char when Make the DEBUGF() macro portable by (ugh) adding a Debug() function, which Convert make(1) to use ANSI style function declarations. Variable Possibly expand the variable name's embedded variables before using it, as Split var.c into var.c and var_modify.c and move all the modification funcs Split var.c into var.c and var_modify.c and move all the modification funcs Make both arguments to str_concat() const char *'s and remove STR_DOFREE Spelling corrections. Add support for the ATI Radeon 9600 XT and XT_S. * Restore indentation to tabs. Partial sync from FreeBSD, add dummy syscalls for extended attribte direct sysargs to /dev/null for the emualtion system call sets to avoid a Continuing synchronization from FreeBSD. Fix an inverted conditional which could lead to nameBuf being truncated in Temporarily change the net.inet.tcp.sack default from 1 to 0 after confirmed Fix a bug in the checking of malloc()'s return value. It turns out to have Cave in and remove NULL checks for M_WAITOK mallocs. DragonFly's M_WAITOK Follow NOFSCHG if defined. (It's needed to be able to run make installworld Remove unused junk from the slab allocator. Do a cleanup pass on the mbuf allocator. Reorder the mmbfree cache tests Bring in various fixes from FreeBSD: Clear the NOCORE flag on any text mappings that the RTLD modifies due to Lots of bug fixes to the checkpointing code. The big fix is that you can Cleanup some dangling issues with cache_inval(). A lot of hard work went Re-enable SACK by default. Jeff fixed the corruption issue reported by Give init the ability to chroot to a directory based on kernel environment Fix a conditional. sdl was not unconditionally being checked for NULL. Add code to the BIOS VM86 emulator to detect writes to the 8254. If a sendfile() was seriously broken. It was calling vm_page_free() without Fix a number of SMP issues. Cleanup the 'cache_lock: blocked on..' warning message. Fix a minor Add support for adjusting the interrupt throttling rate via The fp argument to vn_open() is optional but the code wasn't treating it Fix the PRI field to not display bogus process priorities for pure Fix a bug in chown, chmod, and chflags. When the setfflags(), setffown(), Document the mandatory use of vget() prior to modifying vnode/inode ops. There is enough demand for Kip Macy's checkpointing code to warrent Add the sys_checkpoint(2) manual page and expand checkpt(1). Remove checkpt/ from Makefile.modules. checkpt is not integrated into the Remove the checkpoint module. checkpointing is now integrated into the kernel patch-4.10: Clean some includes and remove ifdef __STDC__, -Wall cleanup, o Print a warning when we are given two scripts for one target. o Pacify ``make -f /dev/null -V FOO''. o Fix proto type o Give make(1) the ability to use KQUEUE to wait for worker Cleanup some ESTALE issues on the client when files are replaced on Fix a bug in ESTALE handling for NFS. If we get ESTALE in vn_open() we Fix the boottime calculation when the time of day is set in absolute terms. Unlock the namecache record when traversing a mount point, then relock and test Bring in FreeBSD/1.206 by Alan Cox, bde@, and tegge@: Bring in elements from the FreeBSD usbdevs that the DFly usbdevs does not vm_page_free_*() now requires the page to be busied, fix a case in Do not reinitialize the translation mode if reattaching to an existing Document vmpageinfo.c Fix a diagnostic check related to new VOP_INACTIVE semantics. Fix field for recent radix.h cleanups. Remove bool and boolean_t typedefs from header files where they don't belong, Correct a bug where incoming connections do not properly initialize the 'bool' is a really bad name for a variable, rename it. Fix the keyboard lockup problem. There were two big clues: The first was a VFS messaging/interfacing work stage 10/99: Synchronize usbdevs with NetBSD and regenerate. Cleanup missing and duplicate defines from the last commit, rename an Temporarily allow recursion on locks to deal with a double lock in the The dos slice scanner was incorrectly including extended partition entries There seems to be a race during shutdown where ifa->ifa_addr can become Correct conditional which would always make kcore_open() fail and return Move the CCVER override for the release build from a make command line Remove bogus DIAGNOSTIC code that checked if the process was SZOMB or SRUN Add missing .. Fix format %lx->%x for ntohl conversions in diagnostic warning printfs. Fix duplicate script warnings byt remove a duplicate .include of bsd.subdir.mk. Fix printf format specifier from %lx->%x for ntohl argument. Remove duplicate _EXTRADEPEND entry. bsd.lib.mk already has one. It is Remove duplicate .include of bsd.subdir.mk. It is already indirectly The ../Makefile.inc chain may be .included early (it is also included by Clean up "`cache' might be used uninitialized" warnings. These come from Move the doscmd: depdendancy to after the .include so it does not Correct the make target the user is told to use to fetch required Remove confusing comment. Make sure that cn_flags is properly updated to account for side effects The grouplist variable made local in the last commit was not being NULLed Do not loop 3 times in the interrupt processing code, it's an unnecessary Remove now-unused loop_cnt variable. Add a missing initialization for the error variable which resulted in netstat Add the -P flag which displays more PCB information (in particular, TCP). Make sure ntfs_lookup() has the correct side effects by ensuring that Try to be a bit smarter when closing down a ULPT device. Silence a compiler warning by adding parenthesis. Journaling layer work. Add a new system call, mountctl, which will be used Properly recognize a YUKON device's on-board ram. Fix to make jumbo frames work properly. Do not allow '.' or '..' to be specified as the last path component when Add support for tail -f'ing multiple files. part 1/2. Improve the printing of ==> filename <==. Do not print the same filename Fix a range check bug in lseek() Journaling layer work. Rename the proc pointer p to pp to clarify its use in portions of the tty nvp may fall through as NULL, check it before vput()ing. Get rid of dead non-DragonFly code. Journaling layer work. Lock down the journaling data format and most WARNS 6, cleanup compiler warnings, de-register, staticize, etc. Restore b_data prior to calling relpbuf(). This isn't really necessary but Journaling layer work. Add shims and skeleton code for most of the WARNS 2->6 and a minor code readability cleanup. Make sure the temporary .c file generated from the .y file is properly Fix a kernel crash that occurs when the SMB protocol stack is used. The Merge of FreeBSD rev. 1.36+1.37 of ip_nat.c. Conditionalize declarations Correct two bugs that may result in incorrect CBCP response for While removing a memory leak, rev 1.32 introduced a Do not specify the -B option when executing the sub-make. In the BSD Fix one of probably several smbfs issues. smbfs is improperly tracking Fix a memory leak in regex. This is the initial skeleton for the new mountctl utility. Manual page, Add support for retrieving the journal status via mountctl. Increase some Revamp the argument format a bit, add basic suppot for creating, deleting, Add /sbin/mountctl to the build. When re-connecting an already connected datagram socket be sure to clean Oops, undo accidental commit. The last commit was not related to the proc0 is still used by e.g. smbfs to fork off a kernel thread and certain Do not early terminate if ^C is hit just as a valid job is returned by Followup note last commit: FreeBSD PR/66242, FreeBSD/1.68 originally Add syscall primitives for generic userland accessible sleep/wakeup Add missing kern_umtx.c to sys/conf/files. Add umtx.c, a simple utility which implements userland mutexes using Minor correction in umtx_*() calls, the mutex pointer should point to Minor correction in umtx_*() calls, the mutex pointer should point to falloc() was not returning an error code on failure. When a PCMCIA networking card is removed the IF code may free() the network Add some descriptive comments. Change lockcount->lockcountnb in an assertion. Properly vget() vnodes that the syncer intends to VOP_FSYNC(), rather then Replace the cache-point linear search algorithm for VM map entries with The vnode reclamation code contains a race whereby a blocking condition may Repo-copy vinumparser.c and vinumutil.c from /usr/src/sys/dev/raid/vinum Disable hardware checksum support by default, it produces packet corruption. Tell the user more explicitly what port needs to be installed to get the Mount points use a special empty namecache entry to transition from one Do not leave VCTTYISOPEN set if our attempt to open /dev/tty fails, otherwise Fix the virtual 'status' file for procfs. The wrong length was being used, getblk() has an old crufty API in which the logical block size is not a Redo argv processing to better conform to standards. A NULL argv is no Add a sysctl to control 8254 bios overwrite warnings. Default is on. Take advantage of our new namecache topology to generate the actual paths Remove _THREAD_SAFE depenendancies. Create weakly associated stubs for encap_getarg() was not properly loading the pointer argument associated Fix bug in last commit that broke 'df'. 'sfsp' is now a structural pointer Count time spent in interrupts in the overall runtime calculation so ps Fix an invariant test that tries to catch locked tokens being left on the Add a intrmask_t pointer to register_int() and register_swi(), and make Fix type-o in last commit. The last commit broke ttyname(), which broke ssh -1. Fix that, plus Minor adustments to avoid a signed/unsigned comparison compiler warning Fix a hard-to-find bugaboo in the struct file list sysctl. The code was Generate more useful -v information on the console during device attach. m_clalloc() was improperly assuming that an mcl malloc would always succeed Annotate the class byte with a class name in the bootverbose pci "found->" One of the last things the system does before it tries to mount root is Revert the last device_print_child() change, it was too confusing to The last commit was not sufficient. Rework the code a bit to make it gdb-6 uses /dev/kmem exclusively for kernel addresses when gdb'ing a live Add support for pure kernel thread stack frames. Pure kernel threads do not Up the initial interrupt configuration hook delay to 20 seconds before Fix a case that can prevent the vnlru_proc vnode recycler from operating. Fix a cache_resolve() vs cache_inval() race which can result in a livelock. Greatly reduce the size of ISOFS's inode hash table. CDs and DVDs are small Add support for the Intel 82562ET/EZ/GT/GZ (ICH6/ICH6R) Pro/100 VE Ethernet. rcorder tries real hard to free things while processing the list but this Track the last read and last write timestamp at the device level and modify Add a manual page for the umtx_*() system calls. Implement TLS support, tls manual pages, and link the umtx and tls manual Implement TLS support, tls manual pages, and link the umtx and tls manual Add system call prototypes for userland. Improve the contigmalloc() memory allocator. Fix a starting index bug, Add DFly copyright to cache.c. This file was originally my creation in Have the getroot script chdir into /etc/namedb itself instead of relying Fix an issue that the iwi driver seems to hit, that of routing socket Rewrite the loops which extract the interpreter name and arguments out Add signal mask save/restore to the checkpoint code. Reorder the file Fix a firewall rule ordering problem, the 'OPEN' firewall mode was Allow the #! command line to be up to PAGE_SIZE long, rather then Journaling layer work. Generate output for path names, creds, and vattr, Clean up the XIO API and structure. XIO no longer tries to 'track' partial PHOLD is a throwback to FreeBSD that we don't actually need here. This off_t is a signed value. The last commit caused the kernel to fail with Also, do not use M_NOWAIT for an allocation that must succeed. This is a major revamping of our MSFBUF API. MSFBUFs are used to map Pass the memfifo option to the kernel. Make msf_buf_kva() return the correct base address. It was not including msf_buf_kva() now returns a char * rather then a vm_offset_t. Journaling layer work. Write the actual data associated with a VOP_WRITE Remove some of the nastier debugging printfs. msf_buf_alloc() can be called with a NULL vm_page_t when e.g. the first Fix an incorrect pointer in a journal_build_pad() that led to a panic. Add a simple msf_buf_bytes() inline which returns the number of bytes Fix a number of alignment and offset bugs that were corrupting the Replace references to TurtleBSD with DragonFlyBSD. Turtle was one of the Bring in the IWI driver from FreeBSD and merge in ALTQ support. Do some WARNS 6 cleanups. Add __unused, add a few type casts, and Bring in the IWI driver from FreeBSD and merge in ALTQ support. Bring in Damien's IPW driver. Change NOWAITs into WAITOKs. In FreeBSD NOWAIT allocations seem to be First cut of the jscan utility. This will become the core utility for Cleanup the debug output, fix a few data conversion issues. Sync with FreeBSD/1.103. In the softupdates case for ffs_truncate() we Add a couple more PCI IDs for the Intel ICH5 ATA100, ICH6 SATA150, pipe->pipe_buffer.out was not being reset to 0 when switching from direct Fix a serious bug in cache_inval_vp()'s TAILQ iteration through v_namecache. Add a function that returns an approximate time_t for realtime for Fix a couple of NFS client side caching snafus. First, we have to update Implement CLOCK_MONOTONIC using getnanouptime(), which in DragonFly is Bring in the minix editor 'mined', in its original form except for necessary Bring in PicoBSD changes from the FreeBSD Attic. Make ^L redraw the screen, like people expect. Support XTERM function key escape sequences. Add 'mined' to the bin build and add a minimal manual page for it. flesh out a tiny bit so people at least know what the help key is. Bring in some CAM bug fixes from FreeBSD. src/games WARNS 6 cleanups. Fix a bugaboo in the last commit. We cannot safely modify n_mtime based Fix a potential security hole by outputing non-printable characters related Close a possible security hole by using strvis() when displaying td_comm. Clean up a number of caching edge cases in NFS, rework the code to be Ignore additional stty control characters like ^Y which interfere with Add RCNG support for setting the negative attribute cache timeout, make Add a sysctl "net.inet.tcp.aggregate_acks" which controls the tcp ack Add Windoz autorun support to the CD to automatically pop up a browser with a Fix a crash in the firewire / DCONS code that occurs when the crom data Pick up changes in rev 1.8 of src/sys/dev/ic/mpt_netbsd.c from NetBSD. Rewrite a good chunk of MAKEDEV, commonizing paths, cleaning up device Fix a minor bug in pass*) generation. 'units' was not properly optioned. Do not allow the journaling descriptor to be a regular file on the same Start working on the full-duplex journaling feature, where the target can Reverse the order of dragonfly and FreeBSD since it says right after Allow an info.size field of -1 to indicate that the descriptor should map Because destroy_all_dev() checks the mask/match against the device's si_udev, Correct the th_off check against ip_len. The check in ip_demux occurs Additional note to last commit. GCC-3.4 improperly generates a warning Remove an assertion in bundirty() that requires the buffer to not be on Have the server complain if the NFS rpc stream from a client gets corrupted, Fix a server-side TCP NFS mount stream sequencing problem which could result Display additional information about a vnode's object. Sync with the kernel to make ncptrace work again. CINV_PARENT no longer Fix a bug where the main system clock stops operating. We were using a field Do a better job distributing RPC requests on the NFS server side. This The old ntpdate displayed the clock error by default. Change rdate to do Temporarily back out the last change due to time zone display issues loginfo/commitinfo test test 2 Add another syscall test, and test cvs commits after loginfo/commitinfo/config Don't hide the library build's ar/ranlib lines, or shared library link Cleanup and retool portions of the TLS support and make sure that Move the setting of sopt->sopt_dir down into the lower level sogetopt() The vnode recycler was not handling a hardlinking case that is capable WARNS=6 cleanup. General WARNS cleanups: staticize functions. Adjust strdup() WARNS cleanup of array initializer. WARNS=6 cleanup. WARNS=6 cleanup. Bring in some work from FreeBSD: A cache invalidation race that generates a warning was not properly WARNS?=6 cleanup for fstat. Add the 'webstress' utility. This program will fork a number of child A program which dumps the wildcard hash table for the TCP protocol for Fix a bug in the distributed PCB wildcardhash code for TCP. For the SMP Apply same bug fix as last commit to IPV6. Automatically calculate the kernel revision for release tags by extracting Make sure neither the boot loader nor the kernel is compiled with the Add cases for the nvidia nForce3 PRO and PRO S1. Who knows how well it Add some minor in-code documentation. libstand is used by the boot loader, make sure the stack protector is Correct an NFS bug related to ftruncate() operations. When the client Include fsid information in the mount point dump. Update the various DragonFly supfiles, rename some of them to make their add ports/polish The cdrom MNT_ROOTFS check was interfering with the NFS export handling, Add a hardwired dhcpd.conf for the installer's netboot server support. Sync up to the latest installer packages (with netboot server capabilities!). The release is 1.2, bump the HEAD of the tree to 1.3 for post-release Bump HEAD's official release #define to 130000. Clearly state when ACPI is overriding APM's device entries. Optimize lwkt_send_ipiq() - the IPI based inter-cpu messaging routine. Only bump the switch_count counter when lwkt_switch() actually switches Zero the whole structure, not just the name, so we don't trip up on this Implement Red-Black trees for the vnode clean/dirty buffer lists. Fix a bug in the last commit. A FIFO calculation was incorrect (3/2 instead staticize lwkt_reqtoken_remote(). CVS was seg-faulting on systems with hostnames greater then 34 characters Abstract out the routines which manipulate the mountlist. NULL-out two stack-declared variables that were incorrectly assumed to Get rid of VPLACEMARKER and retool vmntvnodescan() to use a secondary Make the kernel malloc() and free() MP safe by pushing the BGL past the Document the rwlock routines as being MP safe. Add some convenient targets for cvs maintanance by the cvs meister. Make access to basetime MP safe and interrupt-race safe by using a simple Fix the structural type for kern.boottime and kern.basetime. Fix basetime Initial commit for the DragonFly home-made ntpd client. Why? Because Correct some packet ntohs/htons issues. Fix a documentation bug. Cleanup compiler warnings. Generate cleaner debug output. Change the Fix an overflow in the delta time calculation. A double as incorrectly Allows 16 samples with a correllation >= 0.96 in addition to the Change client_check() to calculate the best offset and the best frequency Implement a variable polling rate capability. Don't issue a frequency correction if it is the same as the last Implement a course offset adjustment for large time steps. Add information on interrupt preemptions by the current thread on any given Make the trace code a bit smarter. It now looks around for something Add config file support, daemon() backgrounding, pidfile, Redo the way the thread priority is displayed. Split out the critical Fix an SMP bug. The SMP startup code waits for the APs to finish Minor rearrangement of an mpcount. This isn't actually a bug because the On MP systems, malloc's using R_NOWAIT are not supposed to block, so don't Add -n (not for real) option. The program goes through motions, but Do not try to collect offset data if a prior offset correction is still Correct a bug in the last two commits. The time_second global was not Implement -s/-S (do quick course adjustment on startup). Add a manual page, clean up the options display, and link dntpd into the Clean up the manual page, correct spelling, add additional information. Final forced commit to correct log comments. char's are *signed*, not Report on segmentation violations (from testing file mmap) rather then A program which sets up a TLS segment and then loops with random sleeps Add a binary library compatibility infrastructure. Library updates are The library compat Makefile needs 'tail'. If a process does not block between setting up %gs and fork()ing, a Adjust the upgrade target to remove libraries from /usr/lib that exist in incore() is used to detect logical block number collisions, and other Initial commit of the generation utility and configuration files for the Give the DEVELOPMENT branch a subversion so we can do build/installworld Do not allow installworld to proceed if the currently running kernel Major TLS cleanups. Document additional work in last commit. Bumped library to ld-elf.so.2 Add NEWBUS infrastructure for interrupt enablement and disablement. This Have the EM device call the new interrupt enablement and disablement Do better range checking on the LDT. FreeBSD-SA-05:07.ldt Get rid of bus_{disable,enable}_intr(), it wasn't generic enough for Get rid of the bad hack that was doing network polling from the trap code. (add missing file related to last commit) Rewrite the polling code. Instead of trying to do fancy polling enablement Fix a bugaboo in the last commit. We can't move to a generic mux yet Change ifconfig 'poll' to ifconfig 'polling', the same keyword that it Get rid of IFCAP_POLLING for now. Properly conditionalize a call to ether_poll_deregister via Fix a race in the serializer's use of atomic_intr_cond_enter(). When Remove some debugging that crept int othe last commit. Fix a bug in the serializer's race detection code. It is possible for Properly initialize the serializer by calling lwkt_serialize_init(). Implement a new cputimer infrastructure to allow us to support different More cleanups, add the API implementation to select the system clock. Disable the ability to change the system clock with a sysctl. More Use the ACPI timer as the system clock if possible. This should free up Add a simple API tha allows the interrupt timer to efficiently convert Fix a recursive clock_lock() on SMP systems which was deadlocking on boot. Fix a bugaboo in the last commit. The wildcard patterns were not accounting Only do the OS version requirements check if DESTDIR is non-existant, Clean up type-o's. Remove spl*() calls from the bus/ infrastructure, replacing them with Conditionalize an #include so libcam can build the file. Remove spl*() calls from the atm code, replacing them with critical sections. Remove spl*() calls from the crypto device driver, replacing them with Conditionalize thread2.h so /usr/src/usr.bin/fstat can include vm_object.h. Remove variable names from procedure declarations so userland doesn't Fix a warning by conditionalizing a forward declaration. After some thought, replace the splhigh()/spl0() combination in swapout() Remove spl*() calls from the netproto/atm driver, replacing them with Remove spl*() calls from netinet, replacing them with critical sections. Make -I only apply to rm's run in the foreground. Silently discard it if Remove spl*() calls from net/i4b, replacing them with critical sections. Remove spl*() calls from i386, replacing them with critical sections. cpu_mb2() needs to load %eax with $0 before running cpuid. Add a sysctl, debug.use_malloc_pattern, that explicitly initializes data Augment the panic when attempting to switch from a FAST interrupt to include vm_contig_pg_free() must busy the page before freeing it in the case Replace cpu_mb1() and cpu_mb2() with cpu_mfence(), cpu_lfence(), cpu_sfence(), The acpi module was failing to load due to exposed crit_*() functions. Add Remove spl*() calls from kern, replacing them with critical sections. Add some missing #include's from the last commit. Get rid of an unused variable due to the last commit. Reorder code in m_chtype() to properly decrement the mbtype stats. Before Temporary hack to fix interrupt race when decrementing a shared Handle the case where the version file might be empty, which can occur Rollup mbuf/objcache fixes. Add a missing lwkt_reltoken() in the NULL return path. Do not count NULL Fix a bug in the mbstats accounting. m_mbufs was being decremented for Attempt to avoid a livelocked USB interrupt during boot by not enabling Another terrible hack to leave interrupts disabled until the USB bus Replace SPLs with critical sections in the opencrypto code. spl->critical section conversion. spl->critical section conversion. spl->critical section conversion. spl->critical section conversion. spl->critical section conversion spl->critical section conversion. spl->critical section conversion, plus remove some macros which are now spl->critical section conversion. spl->critical section conversions. spl->critical section conversion. Add missing #include <thread2.h> to support the critical section calls. Move sys/buf2.h and sys/thread2.h into the #ifdef _KERNEL section. Fix mismatched crit_*() pair. Add additional sanity checks, remove unused arguments to vm_page_startup(). Remove illegal parens from #ifndef. The callout timer init code was using { SI_SUB_CPU , SI_ORDER_FIRST }. Rip out bad spl manipulating junk from mpt and clean it up. spl->critical section conversion. spl->critical section conversion. Also fixes a missed spl in DGM. spl->critical section conversion. spl->critical section conversion. Remove all remaining SPL code. Replace the mtd_cpl field in the machine When cleaning an mbuf header for reinsertion into the objcache, make sure Abstract out the location of an m_tag's data by adding a m_tag_data() inline. Fix two bugs in the LWKT token code. Add more magic numbers for the token code. Fix a bug in the physmap[] array limit calculation and rewrite portions of Add a missing crit_exit(). The code path in question only got executed Fix a serious SMP bug. The RWLOCK support used by dev/raid/aac, Tokens are recursive in the context of the same thread. This also means Add a DEBUG_TOKENS option which causes token operations to be logged to Introduce an ultra-simple, non-overlapping, int-aligned bcopy called bcopyi(). Reimplement the kernel tracepoint facility. The new implementation is Use the KTR facility to trace token operations. Add a caller backtrace feature (enabled by default), which records part of Use KTR's built-in call chain recording rather then hacking it up ourselves. Include a bitmap of allocated entries when built with INVARIANTS. I Add KTR support to the slab allocator. Track malloc's, free's, oversized Correct a missing macro element for the NON-KTR case. Correct KTR masks for memory logging. Rewrite a good chunk of the ktrdump utility to work with the new DragonFly Add additional sanity checks to IPIQ processing, do some cleanups, Add KTR support to the IPIQ code. Have ktrdump run 'nm' on the kernel execfile and translate the caller1,2 Bump fd_lastfile, freefile, and refcnt to 32 bit ints. Also bump cmask File descriptor cleanup stage 2, remove the separate arrays for file Synchronize the fstat program with recent file descriptor cleanups. Synchronize libkcore with recent file descriptor cleanups. Synchronize the ipfilter contrib code with recent file descriptor cleanups. Randomize the initial stack pointer for a user process. Introduce a Document cleanvar_enable in rc.conf.5 and document the purge code The recent file descriptor work is significant enough to deserve a Fix a race between fork() and ^Z. If the ^Z is handled just as the forked Do a quick cleanup pass on the userland scheduler and move resetpriority() Repo-copy kern_switch.c to usched_4bsd.c, remove kern_switch.c, and point Remove unused variables (from prior spl->critical section conversion) Associate a userland scheduler control structure with every process and Cleanup indentation, no operational changes. Move remaining scheduler-specific functions into the usched abstraction. Move more scheduler-specific defines from various places into usched_bsd4.c * Remove a procedural layer in the scheduler clock code by having Print out additional information for a magic number failure assertion. Allow the CTYPE macros to be disabled, forcing procedure calls to be used Remove an assertion that does not compile due to a lack of a KERNLOAD The recent commit to propogate kernel options to modules enabled the Yet more scheduler work. Revamp the batch detection heuristic and fix a few Re-commit usched_bsd4.c (losing the history) to try to fix a repository Do not abort the entire core dump if VOP_VPTOFH() fails. VPTOFH is not The pipe code was not properly handling kernel space writes. Such writes Fix a few issues in the kernel-side journal. The size of a nesting record may not be known (due to the virtual stream Major continuing work on jscan, the userland backend for the journaling * Fix a number of alignment errors that was causing garbage to be parsed. Remove some debugging printfs and fix a bug where libc's fread() returns Generate the correct referential data when journaling hardlinks. Add support for mirroring symlinks and hardlinks. Revert the last commit until a better solution can be found, it breaks Add another argument to fp_read() to tell the kernel to read the entire Implement the full-duplex ack protocol. refurbish some of the memory Bring mountctl up-to-date with memory fifo statistics structural changes. Add an option and test implementation for the full-duplex ack protocol. Work around a ctype bug when displaying printable characters in the It is not acceptable to index the array out of bounds if an illegal index Adjust the inline to take a pointer to a constant array to avoid a Add missing m_freem() in BPF if the mbuf exceeds the interface mtu. Make shutdown() a fileops operation rather then a socket operation. Add journaling restart support, required to produce a robust journaling Check for a free-after-send case and panic if detected. For now just use lwkt_gettoken() rather then a trytoken/gettoken combination. Add some missing crit_exit()'s. The original code just assumed that the Move a mistaken crit_exit() into a crit_enter(), which was panicing the Reorder the INVARIANTS test in crit_enter() to occur prior to modifying * Fix a bug that could cause dc_stop() to try to m_freem() something that's KTR_MALLOC should be KTR_MEMORY When a usb mass storage device is removed the related CAM SIM structure is DELAY() is a spin loop, we can't use it any more because shutdown Add KTR support for usb_mem to trace usb-related allocations. Fix numerous extremely serious bugs in OHCI's iso-synchronous code. I'm Fix a MP lock race. The MP locking state can change when lwkt_chktokens() If a fatal kernel trap occurs from an IPI or FAST interrupt on a cpu not Interlock panics that occur on multiple cpus before the first cpu is able to Limit switch-from-interrupt warnings to once per thread to avoid an endless Add some additinal targets to allow elements of a buildworld to be Add some debugging code to catch any dirty inodes which are destroyed Add some conditionalized debugging 'PANIC_DEBUG', to allow us to panic a When a cpu is stopped due to a panic or the debugger, it can be in virtually Support disablement of chflags in a jail, part 1/2. Only compile in lwkt_smp_stopped() on SMP builds. Additional work to try to make panics operate better on SMP systems. Fix a critical bug in the IPI messaging code, effecting SMP systems. In Fix a sockbuf race. Currently the m_free*() path can block, due to Do not compile the kernel with the stack protector. I've decided to tolerate Support disablement of chflags in a jail, part 2/2. This actually isn't Add a missing FREE_LOCK() call. Add missing crit_exit(). Add a new kernel compile debugging option, DEBUG_CRIT_SECTIONS. This fairly Add a missing crit_exit(). If multiple processes are being traced and some other process has a write There is a case when B_VMIO is clear where a buffer can be placed on the Stephan believes that this patch, just committed to FreeBSD, may fix Bump the development branch sub-version from 1.3.2 to 1.3.3, indicating Add a new system config directive called "nonoptional" which specifies Port a major reworking of the way IPS driver commands are managed from Fix a bug in the last commit. When using the UFS dirent directly, Add a sanity check for the length of the file name to vop_write_dirent(). Fix a race in rename when relocking the source namecache entry. Since we Fix an inode bitmap scanning bug. Due to an error in the length adjustment UFS sometimes reports: 'ufs_rename: fvp == tvp (can't happen)'. The case Convert RANDOM_IP_ID into a sysctl. Have vidcontrol set the video history size based on a new rc.conf variable, Instead of resetting the video buffer's history size to the system Add a TTY_PATH_MAX limit, set to 256, and reduce the size of the ttyname Reduce the buffer size for the threaded version of ttyname() to TTY_PATH_MAX. Filesystem journaling. Reorganize the journal scan for the mountpoint to When writing UNDO records, only try to output the file contents for VREG Implement FSMID. Use one of the spare 64 bit fields in the stat structure Reduce critical section warnings for AHC when critical section debugging Fix a serious bug in cache_inefficient_scan() related to its use of Only include thread2.h for kernel builds (its macros are used by vm_page.h's Add a typedef ufs1_ino_t to represent inodes for UFS1 filesystems. Dump and restore need to use a UFS-centric inode representation. Convert Bump the development sub-version to 1.3.5. Require HEAD users to upgrade to 1.3.5 before running installworld, due Fix a deadlock in ffs_balloc(). This function was incorrectly obtaining a Use a typedef that already conveniently exists instead of anonymous Get rid of smp_rendezvous() and all associated support circuitry. Move Make sure the vnode is unlocked across a dev_dclose() call, otherwise a Add a missing crit_exit(), fixing a panic. Attempt to continue with the Print out a little more information on the PXE boot configuration. Remove old #if 0'd sections of code, add a few comments, and report a bit Compile up both the NFS and the TFTP version of the PXE loader and Give the kernel a native NFS mount rpc capability for mounting NFS roots by Cleanup the module build and conditionalize a goto label. Revert the very last commit to ehci.c (1.12). It was locking the system Merge the following revs from NetBSD (in an attempt to bring new material 1.101 Update code comments. Rework and expand the algorithms in JSCAN, part 1/2. Implement a new Rework and expand the algorithms in JSCAN, part 2/?. * Generally change NOXXX to NO_XXX, similar to work done in FreeBSD. Rework and expand the algorithms in JSCAN, part 3/?. Document a special case for Journaling PAD records. PAD records have to Rework and expand the algorithms in JSCAN, part 4/?. Rework and expand the algorithms in JSCAN, part 5/?. Slightly reorganize the transaction data. Instead of placing the REDO data Rework and expand the algorithms in JSCAN, part 6/?. Syntax cleanup, add a code comment, add a newline in a bootverbose Fix isa_wrongintr. The APIC vector was being directly assigned to a C DragonFly's malloc only guarentees X alignment when X is a power of 2, Fix a serializer bug. The SMP serializer would in some cases fail to Fix a token bug. A preempting interrupt thread blocking on a token cannot Cleanup minor syntax/informational issues. Rename all the functions and structures for the old VOP namespace API Cleanup a couple of serious issues with vinum. Bump the development sub-version to 6, covering the following major changes Add -d /cvs to tag update targets. Bump __Dragonfly_version to 130006 (+ slip preview on the file) Reserve the same amount of space for the spinlock structure whether we are Split spinlock.h into spinlock.h and spinlock2.h so we can embed spinlocks in Add an option, -y, which displays the 64 bit FSMID for a file or directory. Add an argument to vfs_add_vnodeops() to specify VVF_* flags for the vop_ops With the new FSMID flag scheme we can optimize the update of the chain by Using the ACPI idle hook while shutting down ACPI during a halt or reboot Fix the infinite-watchdog timeout problem. the pending_txs count was not Re-initialize the interrupt mask on ACPI wakeup. This seems to Because recalculate is only called once or twice for long sleeps, Do not attempt to modify read-only-mounted filesystems in ufs_inactive(). Add a sysctl, kern.unprivileged_read_msgbuf (defaults to enabled) which if Implement sysctls to restrict a user's ability to hardlink files owned by Allow the target safety check to be overridden. May be necessary in certain Ensure that FP registers are not used for integer code. Update subvers to 7 so we can sync up the Preview tag prior to Simon Bump subversion in param.h Add missing crit_exit(). If VR fails to attach the system will assert Attempt to add generic GigE support to MII. If this creates issues we will Properly serialize access in the NV ethernet driver and attempt to fix Remove the INTR_TYPE_* flags. The interrupt type is no longer used to Move a bunch of per-interrupt-thread variables from static storage to Major cleanup of the interrupt registration subsystem. Use sysctl's instead of KVM to access the interrupt name and count list. Fix a bug where fsetfd() was not returning an error when it failed, Move the polling systimer initialization code out of kern_clock.c and into Fix a bad panic check from the last commit in register_randintr(). FreeBSD commit message: Bring another softupdates fix in from FreeBSD, FreeBSD commit message: Implement an emergency interrupt polling feature. When enabled, most Display all IOAPIC pin assignments when bootverbose is specified, not MPTable fixup for Shuttle XPC with an AMD Athlon X2 - there is no entry Only resynchronize the RTC on shutdown if we had previously loaded it and Do a run through of the fragment allocation and freeing code, documenting Oops, fix the polling enable name, it's supposed to be kern.polling.enable, Only check GiGE related flags in the generic code when MIIF_IS_1000X is set Add an mii_flags field to the attach arguments, to make it easier to create Pass mii_attach_args to mii_softc_init() rather then initializing the softc's Add two checks for potential buffer cache races. Document kern.emergency_intr_{enable,freq} in loader.8. Temporarily work around a race in the kernel. The kernel does a sanity check Avoid a recursive kernel fault and subsequent double fault if the VM fault Add a missing BUF_UNLOCK in the last commit. Remove the dummy IPI messaging routines for UP builds and properly Temporary hack until corecode can fix it. There is a p_rtprio and also Redo the interrupt livelock code. Simplify the frequency calculations Fix a long-standing bug in the livelock code. An interrupt thread normally Fix a bug in the ppbus code where an interrupt cookie might be torn down Assert that the vnode is locked when modifying an inode inside ffs_balloc. Add a redundant double-check in ffs_reallocblks and assert that the number Increase the MSGBUF_SIZE from 32K to 64K, boot verbose messages don't Cleanup some of the newbus infrastructure. Add another parameter to BUS_ADD_CHILD to allow children to inherit An exclusive lock on the vnode is required when running vm_object_page_clean(), Move the freebsd package system from /usr/sbin to /usr/freebsd_pkg/sbin add an acpi_enabled() function, invert the "pci" logic and require that Bump config to 400022. Added an 'arch' softlink. 'machine' goes into ICU/APIC cleanup part 1/many. ICU/APIC cleanup part 2/many. ICU/APIC cleanup part 2/many. ICU/APIC cleanup part 3/many. ICU/APIC cleanup part 4/many. ICU/APIC cleanup part 5/many. ICU/APIC cleanup part 6/many. ICU/APIC cleanup part 7/many. Make rndcontrol use the new ioctl so it can properly list interrupt sources Reimplement IPI forwarding of FAST interrupts to the cpu owning the BGL ICU/APIC cleanup part 8/many. Add more documentation for the APIC registers and rename some of the De-confuse the IO APIC mapping code by creating a helper procedure to ICU/APIC cleanup part 9/many. ICU/APIC cleanup part 10/many. Be a lot more careful programming the IO APIC. Display warnings for any configured IO APIC pins that do not actually exist. Fix a bug in the last commit. The wrong argument was being passed to We are already serialized when nv_ospackettx() is called, so it must Allow 'options SMP' *WITHOUT* 'options APIC_IO'. That is, an ability to Fix a symbol not found problem by not including madt.c in the ACPI module. Switch to the BSP when doing a normal shutdown. ACPI can't power the machine Fix the cpu the softclock thread(s) are created on. The softclock threads Fix a comment. The slave is connected to IRQ 2 on the first 8259, not Mark our fine-grained interrupt timer as being INTR_MPSAFE, because it is. The 'picmode' variable was mis-named. The MPTable is actually simply ICU/APIC cleanup part 11/many. Solve the continuous stream of spurious IRQ 7's that occur on machines Make sure that the apic error, performance counter, and timer local When operating in SMP+ICU mode, try to disconnect the 8259 from the cpu Clean up the CPU_AMD64X2_INTR_SPAM option to check the cpu_id and provide Document CPU_AMD64X2_INTR_SPAM in LINT. Minor manual adjustment to add debug.acpi.enabled. Adjust the globaldata initialization code to accomodate globaldata Make tsleep/wakeup MP SAFE part 1/2. Turn around the spinlock code to reduce the chance of programmer error. Revert part of the last commit. We aren't ready for the per-cpu _wakeup USB mouse fix for certain mice, such as the Logitech LX700. Do not assume Fix a broken array lookup in the old 4.3 BSD mmap compatibility code Make tsleep/wakeup() MP SAFE for kernel threads and get us closer to Temporarily check for and correct a race in getnewbuf() that exists due Continue work on our pluggable scheduler abstraction. Implement a system If a /dev/<disk> device node is fsynced at the same time the related Remove ancient interrupt handler hacks that are no longer needed. Do a better job formatting vmstat -i output. Output entries that look like Convert the lockmgr interlock from a token to a spinlock. This fixes a Protect allproc scans with PHOLD/PRELE, in particular to support the Fix a bug in the last commit. The proc pointer can be NULL at the Remove inthand_add() and inthand_remove(). Instead, register_int() and Add a thread flag, TDF_MPSAFE, which is used during thread creation to Add a sysctl and tunable kern.intr_mpsafe which allows threaded interrupts Start working on making the printf() path MPSAFE, because it isn't at the Add a sysctl and tunable kern.syscall_mpsafe which allows system calls Followup to last commit, cleanup some SMP/UP conditionals. Fix the design of ifq_dequeue/altq_dequeue by adding an mbuf pointer and Wrap psignal() and a few other places that require the MP lock when Add a sysctl and tunable kern.trap_mpsafe which allows some traps to run Remove unused label. Document the fact that the vm86 instruction emulator is MPSAFE. Do not try to set up hardware vectors for software interrupts. Consolidate the initialization of td_mpcount into lwkt_init_thread(). Add a lwkt_serialize_try() API function. Fix some minor bugs in lwkt_serialize_handler*() which upcoming code will Add the ips driver to GENERIC. Assert that he mbuf type is correct rather then blinding setting m_type. Fix a mbuf statistics bug. tcp_syncache.cache_limit is a per-cpu limit, reserve enough space for all Make all network interrupt service routines MPSAFE part 1/3. Properly serialize IPW. ipw (is the only driver that) needs a working interrupt to perform Jumbo mbuf mangement's extbuf callbacks must be MPSAFE. Use a serializer Get rid of the p_stat SZOMB state. p_stat can now only be SIDL, SSLEEP, or tsleep_interlock() must be called prior to testing the serializer lock The primary mbuf cluster management code needs to be made MPSAFE since For MPSAFE syscall operation, CURSIG->__cursig->issignal() may be called Do not hold the ifnet serializer when entering tsleep() in the PPP TUN Change the initial path from /bin:/usr/bin to /bin:/usr/bin:/sbin:/usr/sbin cred may be NULL due to a prior error code. crhold() handles NULL creds, Fix a bug in the big tsleep/wakeup cleanup commit. When stopping a Fix a process exit/wait race. The wait*() code was making a faulty test Reduce SCSI_DELAY in GENERIC from 15 seconds to 5 seconds. Fix a bogus proc0 test that is no longer accurate. This should allow the Add support for DLink 528(T) Gigabit cards. Add /usr/pkg/[s]bin to /bin/sh's default path and login.conf's default path. Require pkgsrc to be installed, include pkgsrc bootstrap binaries and mk.conf Update more default cshrc/profiles with new paths. Fix a bug in our RB_SCAN calls. A critical section is required to Sync up misc work on the currently inactive syscall mapping library. Add an option -b which dumps the clean/dirty buffer cache RB trees for Document the nfs dirent conversion code. No functional changes. doreti and splz were improperly requiring that the MP lock be held in order The new lockmgr() function requires spinlocks, not tokens. Take this Synchronize the TSC between all cpus on startup and provide a sysctl, Enhance ktrdump to generate relative timestamps in fractional microseconds, Add KTR_TESTLOG and debug.ktr.testlogcnt, which issues three ktrlog() calls Add additional KTR lines to allow us to characterize the total overhead Clean up more spinlock conversion issues and fix related panics. SB_NOINTR must be set for the socket to prevent nbssn_recv() from hard Fix the directory scan code for SMBFS. It was losing track of the directory Beef up error reporting for a particular assertion to try to track down a Do not ldconfig -m /usr/lib/gcc2 if there is no /usr/lib/gcc2 directory. Fix UP build issues. Move tsc_offsets[] from mp_machdep.c to kern_ktr.c, Make the KTR test logging work for UP as well as SMP. Add KTR logging for tsleep entry/exit and wakeup entry/exit. Add KTR logging for IF_EM to measure interrupt overhead and packet Don't display the file and line by default. Fix another interesting bug. td_threadq is shared by the LWKT scheduler, Add a feature and a sysctl (debug.ktr.testipicnt) to test inter-cpu Remove the 'all' target to fix buildworld, and fix a Move tsc_offsets[] to a new home to unbreak kernel builds, again. pfnStop() seems to take a flags argument which as far as I can tell from Properly integrate the now mandatory serializer into the WI network driver. Add KTR logging to the core tcp protocol loop. Fix a number of panic conditions for network cardbus devices by implementing Add a feature that allows a correction factor to be applied to attempt Clean up some minor typos in comments. Fix numerous nrelease build issues related to the pkgsrc conversion. run ./bootstrap in a chroot so it configures the correct paths in the Cleanup minor typeos. Bump us to 1.3.8 in preparation for pre-release tagging. By the time list_net_interfaces() is called in /etc/rc.d/netif, clone_up() 1.4 Release branched, HEAD is now 1.5. Fix the installer_quickrel target. Also fix the pkgsrc bootstrap, not A ^Z signals the whole process group, causing the parent process (vipw) to Fix a type-o that was causing the wrong mbuf's csum_data to be adjusted After much hair pulling the problem with dual BGE interfaces not coming up Make wait*() behave the same as it did before we moved TSTOP handling Add a 2 second delay after configuring interfaces before continuing. Add a cvsup example to track the 1.4 release. Give up trying to port ezm3, add a cvsup binary bootstrap package to Do not require a .sh extension for scripts in local_startup dirs, ether_input() no longer allows the MAC header to be passed separately, The stat and ino_t changes were not intended to break dump/restore Do not call ether_ifdetach() with the serializer held in IF_TAP. Weed out files with .sample, .orig, or .dist extensions. Proceduralize Preliminary ndis cleanup. The serializer has taken over the functionality Finish fixing NDIS serialization. Wrap detach in the serializer and remove Add a target that will update the 1.4-release slip tag. Mostly fix nullfs. There are still namespace race issues between Clean up unmount() by removing the vnode resolution requirement. Just bx is supposed to point to twiddle_chars, not contain the first element Correct sizeof(pointer) bugs that should have been sizeof(*pointer) Add mk.conf to the ISO and have the installer install it in /etc Use the DragonFly contrib patch system to correct improper sizeof(pointer) Add the '-c cpu' option to arp, netstat, and route, to allow the route Switch the type and how argument declarations around to match what the The random number generator was not generating sufficient entropy by Add a missing #include <sys/lock.h> to fix a UP kernel build problem. Lobotomize libcaps so it compiles again and can be used by the code Bring in the parallel route table code and clean up ARP. The Remove the VBWAIT flag test, it will soon go away. Remove serialization calls that are no longer correct, fixing a panic Make warn() a weak reference. Bring in a bunch of malloc features from OpenBSD and fundamentally change Get rid of seriously out of date example code. Properly assert the state of the MP lock in the async syscall message Make the entire BUF/BIO system BIO-centric instead of BUF-centric. Vnode Fix a bunch of race cases in the NFS callout timer code, in the handling Reformulate some code which was #if 0'd out in the last patch. When Reduce the default NFSv3 access cache timeout from 60 seconds to 10 seconds. bioops.io_start() was being called in a situation where the buffer could Change the server side NFS write gather delay from 10ms to 20ms. This nvi's "memory use after free" bug exists in following call path: Do not set the pcb_ext field until the private TSS has been completely Properly check for buffered data in ukbd_check_char(). This fixes issues A thread may be freed from a different cpu then it was assigned to, dvp must be unlocked prior to issuing a VOP operation to avoid obtaining Pass LK_PCATCH instead of trying to store tsleep flags in the lock buftimespinlock is utterly useless since the spinlock is released vfs_bio_awrite() was unconditionally locking a buffer without checking Add additional red-black tree functions for fast numeric field lookups. Unlock vnodes prior to issuing VOP_NREMOVE to accomodate filesystem Replace the global buffer cache hash table with a per-vnode red-black tree. Roll 1.5.1 for the slip tag Remove two incorrect serializer calls in the NDIS code. Struct buf's cannot simply be bcopy'd any more due to linkages in the cluster_read() was very dangerously issuing a blind BMAP for a buffer Add missing block number assignment in ATA raid mirroring code. The Change KKASSERT() to not embed #exp in the control string. Instead pass cache_fromdvp() uses a recursive algorithm to resolve disconnected Implement a VM load heuristic. sysctl vm.vm_load will return an indication Cleanup the copyright in this file. Easy since its my code. Add an option to add a slight time skew to the execution of scripts to Fix a serious bug in the olddelta microseconds calculation returned by Prevent the driver from reinitializing the card when it's already running. strnstr() was testing one byte beyond the specified length in certain Make a slight adjustment to the last commit. Change saved_ncp to saved_dvp Backout the rest of 1.29. There are a number of issues with the other Bump 1.5.2 for the preview tag, synchronized to just before the Add missing commit for the VM load heuristic and page allocation rate Major BUF/BIO work commit. Make I/O BIO-centric and specify the disk or Don't just assume that the directory offset supplied by the user is Fix numerous translation problems in msdosfs, related to the recent BUF/BIO Sync fstat up to the changes made in msdosfsmount.h. Sanitize status message. Add options to allow the dump offset or system memory size (for the purposes Undo the last commit. At the moment we require access to the structure Add the initialization of blockoff back in, the variable is still used. Clean up the extended lookup features in the red-black tree code. Change *_pager_allocate() to take off_t instead of vm_ooffset_t. The Remove NQNFS support. The mechanisms are too crude to co-exist with Correct some minor bugs in the last patch to fix kernel compilation. Add a RB_PREV() function which returns the previous node in a red-black Add PCI IDs for Intel's ICH7 and ICH7M ATA/SATA hardware, used in Remove VOP_GETVOBJECT, VOP_DESTROYVOBJECT, and VOP_CREATEVOBJECT. Rearrange A VM object is now required for vnode-based buffer cache ops. This NFS needs to instantiate a backing VM object for the vnode to read a symlink. ffs_truncate(), called from, truncate(), remove(), rmdir(), rename-overwrite, Fix a race condition between nlookup and vnode reclamation. Even though the Use the vnode v_opencount and v_writecount universally. They were previously Clone cd9660_blkatoff() into a new procedure, cd9660_devblkatoff(), which Require that *ALL* vnode-based buffer cache ops be backed by a VM object. Give the MFS pseudo block device vnode a VM object, as is now required ufs_dirempty() issues I/O on the directory vnode and needs to make sure A floating point fault (instead of DNA fault) can occur when the TS bit Document the use of SDT_SYS386IGT vs SDT_SYS386TGT when setting up the A number of structures related to UFS and QUOTAS have changed name. A number of structures related to UFS and QUOTAS have changed name. A number of structures related to UFS and QUOTAS have changed name. ufs_readdir() can be called from NFS with the vnode being opened, create Because multiple opens of /dev/tty only issue one actual open to the Properly calculate the ronly flag at unmount time. Transplant all the UFS ops that EXT2 used to call into the EXT2 tree and Unconditionally initialize a VM object for a directory vnode. Continue Synchronize vinitvmio() calls from UFS to EXT2FS. Due to continuing issues with VOP_READ/VOP_WRITE ops being called without Get rid of bogus 'pushing active' reports. Initialize a VM object for VREG Followup last commit, fix missing argument to vinitvmio(). Fix one place where the superblock was being read (and written) at the Remove debugging printfs. Calculate the correct buffer size when reading a symlink via NFS. in_ifadown() was only cleaning up the route table on the originating cpu, /dev/random was almost always returning 0 bytes. This was due to several Note: the previous rev's CVS comment was messed up due to an editor snafu. NTFS sometimes splits the initialization of a new vnode into two parts. If a process forks while being scanned, a non-zero p_lock will be inherited Supply version of wakeup() which only operate on threads blocked on the Fix a livelock in the objcache blocking code. PCATCH was being improperly vop_stdopen() must be called when a fifo_open fails in order to then be Fix an edge case where objects can be returned to a per-cpu cache while Always guarentee at least one space between two network addresses. Conditionalize a lwkt_send_ipiq2() to fix the UP build. Fix a bug in the pkg_add -n tests where pkg_add was incorrectly reporting Recent bug fixes make this worthy for testing, update to 1.5.3 and slip the Generate a host-unreachable failure rather then a crash if the MTU is too Bring in some small changes from FreeBSD. Minor typing cleanups for aicasm. Add spin_uninit() to provide symmetry with spin_init(). Misc sysperf cleanups. Add another mutex tester to test the xchgl Get rid of unused arg. Get rid of LK_PCATCH in the agp lockmgr() calls. AGP ignores the return Remove unused code label. Get rid of LK_DRAIN in dounmount(). LK_DRAIN locks are not SMP friendly and LK_DRAIN locks are no longer obtained on vnodes, rip out the check. Run the lockmgr() call independant of the KASSERT() in smb_co_init(). Get rid of LK_DRAIN and LK_INTERLOCK interactions. Recode interlocks when Get rid of LK_DRAIN, rely on nc_lwant to interlock lock races against Remove all remaining support for LK_DRAIN lockmgr locks. LK_DRAIN was a Remove remaining uses of the lockmgr LK_INTERLOCK flag. Remove the now unused interlock argument to the lockmgr() procedure. Remove LK_REENABLE (related to the LK_DRAIN removal). The nticks calculation is still broken. Sometimes the delta systimer Fix an incorrect header length comparison for IPSEC AH. Add required B_INVAFTERWRITE is no longer used, remove it. If softupdates or some other entity re-dirties a buffer, make sure Call vnode_pager_setsize() before BALLOC rather than after. vfsync() is not in the business of removing buffers beyond the file EOF. Add a memory wrap check to kernacc to try to reduce instances of a bogus Rename KVM_READ() to kread() and make it a real procedure. Also incorporate Generate unique identifiers for simulated FSMIDs so any errors appear to Separate the MD5 code into its own module. Get rid of the weird FSMID update path in the vnode and namecache code. Add the preadv() and pwritev() systems and regenerate. Fix the range checking for all read and write system calls. Fix the Get rid of libcr, the idea(s) behind it are not really applicable anymore Move most references to the buffer cache array (buf[]) to kern/vfs_bio.c. Fix a bug in the POSIX locking code. The system could lose track of Remove the buffer cache's B_PHYS flag. This flag was originally used as Get rid of the remaining buffer background bitmap code. It's been turned Remove non-existant variable from debugging message. Get rid of pbgetvp() and pbrelvp(). Instead fold the B_PAGING flag directly Bring in some fixes from NetBSD: Bring in SHA256 support from FreeBSD. Replace the the buffer cache's B_READ, B_WRITE, B_FORMAT, and B_FREEBUF Remove b_xflags. Fold BX_VNCLEAN and BX_VNDIRTY into b_flags as The pbuf subsystem now initializes b_kvabase and b_kvasize at startup and Remove buf->b_saveaddr, assert that vmapbuf() is only called on pbuf's. Pass Plug xform memory leaks. Don't re-initialize an xform for an SA that m_cat() may free the mbuf on 2nd arg, so m_pkthdr manipulation 32bit from 64bit value fixup. Fix typo. Fix fencepost error causing creation of 0-length mbufs when more strict sanity check for ESP tail. [From KAME] 32bit from 64bit value fixup. Supply a stack pointer for a pure thread context so backtrace works. Plug memory leak in umass. - Add workarounds for dropped interrupts on VIA and ATI controllers. The wrong pointer was being used to calculate the page offset, leading Fix a bug in close(). When a descriptor is closed, all process leaders Fix an information disclosure issue on AMD cpus. The x87 debug registers, Fix a biodone/AR_WAIT case. b_cmd was not getting set to BUF_CMD_DONE, Add a missing ohci_waitintr() call that allows polled operation of Minor cleanup, plus initialize a few additional fields in the proc Remove the accounting argument from lf_create_range() and lf_destroy_range(). Invert a mistaken test. Set b_resid to 0 if B_ERROR is not set. Document the handling of a file holes in ufs_strategy() and clean up - Clarify the definitions of b_bufsize, b_bcount, and b_resid. Block devices generally truncate the size of I/O requests which go past EOF. Cleanup procedure prototypes, get rid of extra spaces in pointer decls. Remove VOP_BWRITE(). This function provided a way for a VFS to override Remove the thread pointer argument to lockmgr(). All lockmgr() ops use the Simplify vn_lock(), VOP_LOCK(), and VOP_UNLOCK() by removing the thread_t Remove the thread_t argument from vfs_busy() and vfs_unbusy(). Passing a The thread/proc pointer argument in the VFS subsystem originally existed Add some ifioctl() td -> ucred changes that were missed. The fdrop() procedure no longer needs a thread argument, remove it. Remove the thread_t argument from nfs_rslock() and nfs_rsunlock(). Remove the thread argument from ffs_flushfiles(), ffs_mountfs(), Remove the thread argument from ext2_quotaoff(), ext2_flushfiles(), Remove the thread argument from all mount->vfs_* function vectors, Fix a null pointer indirection, the VM fault rate limiting code only We have to use pmap_extract here, pmap_kextract will choke if the page We have to use pmap_extract() here. pmap_kextract() will choke on a missing We have to use pmap_extract() here. If we lose a race against page Recode the streamid selector. The streamid was faked before. Do it for lockmgr_kernproc() wasn't checking whether the lockholder as already Remove the internal F_FLOCK flag. Either F_POSIX or F_FLOCK must be set, Add a little program that allows one to test posix range locks. Rewrite the POSIX locking code. It was becomming impossible to track Split kern/vfs_journal.c. Leave the low level journal support code in Recognize the cpu ident for additional VIA processors. Fix three bugs in the last commit and document special cases. Tighten UMAPFS has been disabled (and non-working) for a long time. Scrap it Most of the fields in vnodeop_desc have been unused for a while now. Remove mount_umap. cbb_probe() assumes that the subclass field is unique. This patch further Remove vnode lock assertions that are no longer used. Remove the Attempt to interlock races between the buffer cache and VM backing store Remove the (unused) copy-on-write support for a vnode's VM object. This Pass the process (p) instead of the vnode (p->p_tracep) to the kernel tracing The ktracing code was not properly matching up VOP_OPEN and VOP_CLOSE calls. Oops, last commit was slightly premature. Fix a bug-a-boo and remove Add another mutex tester for Jeff's spinlock code w/ the refcount p_tracep -> p_tracenode, tracking changes made in recent commits. p_tracep -> p_tracenode, tracking changes made in recent commits. Replace the LWKT token code's passive management of token ownership with Make spinlocks panic-friendly. Remove the last vestiges of UF_MAPPED. All the removed code was already Consolidate the file descriptor destruction code used when a newly created Convert most manual accesses to filedesc->fd_files[] into the appropriate Recent lwkt_token work broke UP builds. Fix the token code to operate I'm growing tired of having to add #include lines for header files that Embed the netmsg in the mbuf itself rather than allocating one for Fix a build issue with libnetgraph. net/bpf.h does not need to include Remove so_gencnt and so_gen_t. The generation counter is not used any more. Remove the (unmaintained for 10+ years) svr4 and ibcs2 emulation code. A little script that runs through all the header files and checks that Clean up more #include files. Create an internal __boolean_t so two or Only _KERNEL code can optimize based on SMP vs UP. User code must always Implement a much faster spinlock. Give struct filedesc and struct file a spinlock, and do some initial Do a major cleanup of the file descriptor handling code in preparation for Fix a minor bug in fdcopy() in the last commit, Consolidate the Sync to head. Add a verbose option to vmpageinfo which dumps all the The pageout daemon does not usually page out pages it considers active. Move all the resource limit handling code into a new file, kern/kern_plimit.c. spinlock more of the file descriptor code. No appreciable difference in Start consolidating process related code into kern_proc.c. Implement Move the code that inserts a new process into the allproc list into its Fix issues with an incorrectly initialized buffer when formatting a floppy. When a vnode is vgone()'d its v_ops is replaced with dead_vnode_ops. Modifying lk_flags during lock reinitialization requires a spinlock. Adjust pamp_growkernel(), elf_brand_inuse(), and ktrace() to use Convert almost all of the remaining manual traversals of the allproc Fix several buffer cache issues related to B_NOCACHE. More MP work. * Make falloc() MPSAFE. filehead (the file list) and nfiles are now Add #include <sys/lock.h> where needed to support get_mplock(). * Fix a number of cases where too much kernel memory might be allocated to Remove FFS function hooks used by UFS. Simply make direct calls from ufs Add a read-ahead version of ffs_blkatoff() called ffs_blkatoff_ra(). This Implement msleep(). This function is similar to the FreeBSD msleep() except Greatly reduce the MP locking that occurs in closef(), and remove Clear the new VMAYHAVELOCKS flag when after an unlock we determine that Mark various forms of read() and write() MPSAFE. Note that the MP lock is Get rid -y/-Y (sort by interactive measure). The interactive measure has Further isolate the user process scheduler data by moving more variables Clean up compiler warnings when KTR is enabled but KTR_ALL is not. Remove conditional memory allocation based on KTR_ALL. Allocate memory Add two KTR (kernel trace) options: KTR_GIANT_CONTENTION and Shortcut two common spinlock situations and don't bother KTR logging them. Fix numerous bugs in the BSD4 scheduler introduced in recent commits. gd_tdallq is not protected by the BGL any more, it can only be manipulated Use the MP friendly objcache instead of zalloc to allocate temporary If the scheduler clock cannot call bsd4_resetpriority() due to spinlock Update the manual page to reflect additional spinlock requirements. Another update. Clarify that a shared spinlock can be acquired while holding Since we can only hold one shared spinlock at a time anyway, change the namecache->nc_refs is no longer protected by the MP lock. Atomic ops must Remove vnode->v_id. This field used to be used to identify stale namecache Add an option which dumps the filename from the vnode's namecache link. Fix a file descriptor leak, add a missing vx_put() after linprocfs Rename arguments to atomic_cmpset_int() to make their function more obvious. Fix a bug in the linux emulator's getdents_common() function. The function Misc cleanup - move another namecache list scan into vfs_cache.c An inodedep might go away after the bwrite, do not try to access Fix blocking races in various *_locate() functions within softupdates. Remove LWKT reader-writer locks (kern/lwkt_rwlock.c). Remove lwkt_wait Fix a minor bug in the last commit. lwp_cpumask has to be in the LWP copy Modify kern/makesyscall.sh to prefix all kernel system call procedures Fix a file descriptor leak in cam_lookup_pass() when the ioctl to Fix a WARNS=3 gcc warning related to longjmp clobbers, fix a possible use Remove lwp_cpumask assignment. lwp_cpumask is handled in the bcopy section. Remove an inappropriate crit_exit() in ehci.c and add a missing crit_exit() Add an INVARIANTS test in both the trap code and system call code. The Cleanup crit_*() usage to reduce bogus warnings printed to the console Some netisr's are just used to wakeup a driver via schednetisr(). The Add missing crit_exit() Remove the asynchronous system call interface sendsys/waitsys. It was an Add an option, DEBUG_PCTRACK, which will record the program counter of Add a new utility, 'pctrack', which dumps program counter tracking data Fix namespace pollution. We shouldn't have to fninit to make the FP unit usable for MMX based copies. Move selinfo stuff to the separate header sys/selinfo.h. Make sys/select.h Remove the select_curproc vector from the usched structure. It is used Add kernel syscall support for explicit blocking and non-blocking I/O The pread/preadv/pwrite/pwritev system calls have been renamed. Create Use the _SELECT_DECLARED method to include the select() prototype instead Add two more system calls, __accept and __connect. The old accept() and Well, ok, if you are going to turn off writable strings, then the code Do not set O_NONBLOCK on a threaded program's descriptors any more. Instead, fcntl(.., F_SETFL, ..) should only do an FIOASYNC ioctl if the FASYNC Replace the random number generator with an IBAA generator for /dev/random Replace the random number generator with an IBAA generator for /dev/random Fix a case where RTP_PRIO_FIFO was not being handled properly. The bug led /dev/[k]mem was not allowing access to the CPU globaldata area, because it Swap out FAT12 for NTFS so the boot0 prompt says 'DOS' instead of '??' for Add a new option -H <path> to cpdup. This option allows cpdup to be used Do not attempt to read the slice table or disk label when accessing a raw If we hit the file hardlink limit try to copy the file instead of hardlinking Add a missing initbufbio() to fix a panic when vinum tries to issue a Include kernel sources on the release CD. Use bzip instead of gzip, rename the tar file to make it more obvious that Disassociate the VM object after calling VOP_INACTIVE instead of before. Cleanup, no functional changes. Change the seeder array from a modulo to a logical AND, improving performance Turn on the new kern.seedenable sysctl when seeding the PRNG. Update the manual pages for the kernel random number generator. Correct a problem with the user process scheduler's estimated cpu A broken pipe error is sometimes reported by zcat if the user quits out Add missing prototype. Use pbufs instead of ebufs. Attempt to fix an occassional panic in pf_purge_expired_states() by Add a note on where to find the release engineering document. Bump sub-versions and DragonFly_version in preparation for branching. Add a new target for cvs administration of the 1.6 slip tag. Add a cvsup Adjust HEAD version from 1.5 to 1.7. Add a fairly bad hack to detect ripouts that might occur during a list Remove several layers in the vnode operations vector init code. Declare Get rid of the weird coda VOP function arguments and void casts and Introduce sys/syslink.h, the beginnings of a VOP-compatible RPC-like Check the the ops mount pointer is not NULL before indirecting through it. Add code to dump lockf locks associated with a vnode. Fix a bug where the VMAYHAVELOCKS flag on a vnode may get lost, resulting Make a few more #define's visible when compiling with _KERNEL_STRUCTURES Fix a minor bug that prevented compilation. MASSIVE reorganization of the device operations vector. Change cdevsw For the moment adjust dd to find the new location of the device type Why is ip_fil.h trying to declare kernel procedures for userland #include's? Remove duplicate code line. Fix an incorrect #ifndef label. Also remove a now unnecessary Update the syslink structural documentation. Add syslink_msg.h, containing Get rid of some unused fields in the fileops and adjust the declarations Get rid of a bogus check that cut the blocked-lock wakeup code a little Add structures and skeleton code for a new system call called syslink() Instead of indirectly calling vop_stdlock() and friends, install direct minor syslink cleanups to get the syslink_read() and syslink_write() LK_NOPAUSE no longer serves a purpose, scrap it. Protect the pfshead[] hash table with a token. VNode sequencing and locking - part 1/4. VNode sequencing and locking - part 2/4. VNode sequencing and locking - part 3/4. Update the X11 path for the default login.conf. Add a #define that source code can check to determine that the stat Add some linux compatibility defines, _DIRENT_HAVE_D_NAMLEN and Add a remote host capability for both the source and target directory Generate a nice message and make sure the program exits if we lose a Add a postscript printer filter example using ghostscript for a Remove the coda fs. It hasn't worked in a long time. Fix a memory leak and greatly reduce the memory allocated when remembering Properly update the mtime for directories. VNode sequencing and locking - part 4/4 - subpart 1 of many. Bring in the initial cut of the Cache Coherency Management System module. Fix a case where a spinlock was not being released. Add skeleton procedures for the vmspace_*() series of system calls which Rename functions to avoid conflicts with libc. Rename functions to avoid conflicts with libc. Rename functions to avoid conflicts with libc. Rename functions to avoid conflicts with libc. Rename functions to avoid conflicts with libc. Rename functions to avoid conflicts with libc. Split extern in6* declarations for libc vs the kernel. Create 'k' versions of the kernel malloc API. Rename malloc->kmalloc, free->kfree, and realloc->krealloc. Pass 1 Rename malloc->kmalloc, free->kfree, and realloc->krealloc. Pass 2 Make KMALLOC_ONLY the default, remove compatibility shims for the Remove KMALLOC_ONLY from LINT Move the code that eats certain PNP IDs into a ISA bus-specific file. Clean up module build failures when compiling a kernel without PCI. Fix malloc macros for dual-use header file. Attempt to fix a vnode<->namecache deadlock in NFS's handling of stale Get rid of a struct device naming conflict. Rename struct specinfo into struct cdev. Add a new typedef 'cdev_t' for cdev Rename the kernel NODEV to NOCDEV to avoid conflicts with the userland NODEV. Change the kernel dev_t, representing a pointer to a specinfo structure, Reserve upcall IDs 0-31 for system use. Move flag(s) representing the type of vm_map_entry into its own vm_maptype_t MAP_VPAGETABLE support part 1/3. MAP_VPAGETABLE support part 2/3. MAP_VPAGETABLE support part 3/3. More cleanups + fix a bug when taking a write fault on a mapping that uses Clean up some #include's that shouldn't have been in there. Unbreak Collapse some bits of repetitive code into their own procedures and Fix a bug in sysctl()'s handling of user data. You can't wire 0 bytes Fix a bug in sendmsg() and two compatibility versions of sendmsg(). Fix a bug when '-f -H' is used and the target already exists. cpdup was Bump the version number reported by cpdup from 1.06 to 1.07 Add a README file with some helpful porting hints. I'd rather do this in Commit a comprehensive file describing how to do incremental backups along Make some adjustments to low level madvise/mcontrol/mmap support code to Make some adjustments to low level madvise/mcontrol/mmap support code to Move an assertion in the bridge code so it only gets hit if the bridge Disallow writes to filesystems mounted read-only via NULLFS. In this case Set f_ncp in the struct file unconditionally. Previously we only set it Remove the last bits of code that stored mount point linkages in vnodes. Check that namecache references to the mount point are no longer present Fix a bug in the script softlink code. The softlinks were not being Add an option that causes cpdup to skip CHR or BLK devices. This option Recent dev_t work confused sysctl. Adjust the reported type to udev_t Fix a compile error when DDB is not defined. db_print_backtrace() is Try to clean up any remaining filesystem references when rebooting. Clean Fix an off-by-one error. Track #1 is index 0 in the TOC buffer. Fix a bug in the device intercept code used by /dev/console. The Follow up to kern_conf.c 1.16. We can't just ignore the ops comparison, it Fix a bug where mmap()ing a revoked descriptor caused a kernel panic on a Do not temporarily set signals to SIG_IGN when polling whether the parent Add Marc's monthly statistics script to DragonFly's base dist. These Fix a long-standing bug inherited from FreeBSD. It is possible for a Add two more vmspace_*() system calls to read and write a vmspace. These NULLFS was not releasing a reference to the root of the underlying Correct a compiler warning from the last commit. Add a device that attaches to the memory controller. If ECC is enabled in Greatly reduce memory requirements when fsck'ing filesystems with lots Remove inode free race warning messages. These were originally added to Add a ton of infrastructure for VKERNEL support. Add code for intercepting Reformulate the way the kernel updates the PMAPs in the system when adding Reorganize the way machine architectures are handled. Consolidate the Reorganize the way machine architectures are handled. Consolidate the Add advice if a kernel config file cannot be found to remind people that memset must be a real procedure rather then an indirect pointer because Fix paths to arch/i386, related to the recent architecture topology changes. Get rid of the indirect function pointer for bzero(). We haven't used it Bump the config version. Add a 'cpu_arch' directive that allows the Further normalize the _XXX_H_ symbols used to conditionalize header file Further normalize the _XXX_H_ symbols used to conditionalize header file Purge the IFQ when associating a new altq. Packets that have already been Do a major clean-up of the BUSDMA architecture. A large number of Fix a stack overflow due to recursion. When the namecache must invalidate Major namecache work primarily to support NULLFS. Adjust fstat to properly traverse mount points when constructing a Sync our rm -P option with OpenBSD - if the file has a hardlink count test 4 test 5 Major kernel build infrastructure changes, part 1/2 (sys). Major kernel build infrastructure changes, part 2/2 (user). Misc cleanups and CVS surgery. Move a number of header and source files Remove system dependancies on <machine/ipl.h>. Only architecture files Move <machine/ccbque.h> to <sys/ccbque.h>. ccbque.h is not a Move <machine/dvcfg.h> to the one device that actually uses it, remove More Machine-dependant/Machine-independant code and header file separation. Move the Maxmem extern to systm.h Get the MI portions of VKERNEL to build, start linking it against libc. bmake uses /usr/share/mk/sys.mk, so we cannot require that MACHINE_CPU be Add a missing #undef to properly compile atomic.c functions into the Adjust for symbol name changes. buildworld depends on hw.machine exported from the kernel being correct. Enable the building of boot0cfg for pc32. Check for subdirectories for both the platform architecture and the unresolve the vnode associated with the namecache entry for a mount point Fictitious VM pages must remain structurally stable after free. Use spinlocks instead of tokens to interlock the objcache depot. Allow M_ZERO to be specified when using simple object cache setups which Replace the global VM page hash table with a per-VM-object RB tree. No Check an additional special pattern to detect dangerously dedicated mode. Add a manual page outlining the rules for committers. Make int bootverbose and int cold declarations machine independant. Misc vkernel work. Add a generic interrupt controller type that the virtual kernel build can use. Generate forwarding header files to mimic /usr/include -> /usr/include/sys Fix a NULL pointer dereference introduced in the previous commit. For the moment conditionally remove the declaration of certain libc Local variables that were improperly named 'errno' must be renamed so as Use ${.CURDIR} to get the correct path to the root skeleton directory. Document MADV_SETMAP and MAP_VPAGETABLE. These functions support virtualized Add another ICH PCI ID. Fix umct and add F5U409 USB Serial Adaptor. rename sscanf -> ksscanf Pass NULL to objcache_create() to indicate that null_ctor() and/or null_dtor() Repo copy machine/pc32/i386/mem.c to kern/kern_memio.c and separate out Add a prototype for the new mcontrol() system call. Rename kvprintf -> kvcprintf (call-back version) Add 'k' versions for printf, sprintf, and snprintf. kprintf, ksprintf, and Undo some renames that don't apply to the boot code (linked against libstand). Add IFF_MONITOR support. Rename sprintf -> ksprintf Remove unused procedures and declarations. Continue fleshing out VKERNEL support. Initialize the per-cpu globaldata Rename virtual_avail to virtual_start, so name follows function. Remove unused defines. Make a chunk of low level initialization code for proc0 and thread0 machine Make certain libc prototypes / system calls visible to kernel builds Introduce globals: KvaStart, KvaEnd, and KvaSize. Used by the kernel Make kernel_map, buffer_map, clean_map, exec_map, and pager_map direct Correct a conditional used to detect a panic situation. The index was off by zbootinit() was being called with too few pv_entry's on machines with small Fix compilation error when building without INET6. Fix a number of minor Fix manual page references to omshell. None of the patches in dhclient/client were being applied. Add the patches Remove an old debugging kprintf. Try to locate any instances where pmap_enter*() is called with a kernel Get most of the VKERNEL pmap handling code in. Remove pmap_kernel() (which just returned a pointer to kernel_pmap), and Move dumplo from MD to kern/kern_shutdown.c Continue fleshing out the VKERNEL. Rename system calls, removing a "sys_" prefix that turned out not to be Move uiomove_fromphyhs() source from MD to MI. Remove fuswintr() and suswintr(), they were never implemented and it was a Repo-move machine/pc32/i386/i386-gdbstub.c to cpu/i386/misc/i386-gdbstub.c. GDB stubs were only being compiled on systems with serial I/O installed, The last commit was incomplete, correct. Conditionalize all of the subdirectories in dev so we skip them for Conditionalize the existance of a gdt[] array for the in-kernel disassembler. Conditionalize the existance of a gdt[] array for the in-kernel disassembler. Rename errno to error to avoid conflict with errno.h Remove all physio_proc_*() calls. They were all NOPs anyhow and used VKERNEL work, deal with remaining undefined symbols. Remove the hack.So hack for virtual kernels. Remove old debugging printf. Use Maxmem instead of physmem. physmem is used only within pc32 Initialize thread0.td_gd prior to calling various gdinit functions, because Offset KernelPTD and KernelPTA so we can directly translate a kernel virtual If no memory image file is specified, locate or create one by running Add a new procedure, vm_fault_page(), which does all actions related to Assign proc0 a dummy frame to bootstrap vm_fork(). Fix a conflict with libc's killpg(). Fix compiler warnings Signal handlers usually inherit %gs. Make them inherit %fs as well. This Fix symbol conflict with falloc() define _KERNEL_VIRTUAL if not defined to hack-fix conflicts with normal Make libc prototypes available for kernel builds. Allow certain cpufunc.h inlines to be overridden by virtual kernel builds. Use itimers to implement the virtual kernel's SYSTIMER backend. Add a virtual disk device for virtual kernels to boot from. Add support for a root disk device file. The stack frame available from a signal to user mode stores the fault address Handle page faults within the virtual kernel process itself (what would be Set rootdevnames[0] to automatically boot from ufs:vd0a when a root disk Make the vmspace_*() system call prototypes available to (virtual) kernel Implement nearly all the remaining items required to allow the virtual kernel Modify the trapframe sigcontext, ucontext, etc. Add %gs to the trapframe The signal return code was trying to access user mode addresses When removing a page directory from a page map, the KVA representing Add missing header file. Make more libc prototypes available to _KERNEL builds. Make libc prototypes available to kernel builds Implement vm_fault_page_quick(), which will soon be replacing Rewrite vmapbuf() to use vm_fault_page_quick() instead of vm_fault_quick(). Use CBREAK mode for the console. Add the virtual kernel's virtual disk device to the fray (vd*). Rename the following special extended I/O system calls. Only libc, libc_r, Add a missing pmap_enter() in vm_fault_page(). If a write fault does a COW Disable terminal control characters while the virtual kernel is running, Use our interrupt infrastructure to handle the clock interrupt, but Fix two incorrect sigblock() calls. A virtual kernel running an emulated process context must pop back into A virtual kernel running another virtual kernel running an emulated process Name the virtual kernel disk device 'vkd' instead of 'vd'. Get floating point working in virtual kernels. Add a feature that allows Fix collision with variable named 'errno'. Have vectorctl silently succeed to remove a run-time warning. Tell the real kernel not to sync the file that backs a virtual kernel's Make the size of the pmap structure the same for both pc32 and vkernel Open the root disk with O_DIRECT. We do not want both the real kernel and Properly block SIGALARM and disable interrupts (i.e. signals) in init Fix a bug vm_fault_page(). PG_MAPPED was not getting set, causing the When removing a page directory, tell the real kernel to invalidate the Major pmap update. Note also that this commit temporarily nerfs performance Replace remaining uses of vm_fault_quick() with vm_fault_page_quick(). Remove unused SWI's. Add missing link options to export global symbols to the _DYNAMIC section, Fix a number of places where the kernel assumed it could directly access Misc cleanups. Adjust the gdb patch to account for the new register structure. Set kernel_vm_end to virtual_start instead of virtual_end so it can be used Add missing bzero() during low boot after malloc(). Add single-user mode boot option (-s). Fix the recently committed (and described) page writability nerf. The real Pull in a few bits from FreeBSD. Add a structure size field and floating Include the VN device by default. Allow VKERNEL builds to build certain non-hardware disk modules as well. The FP subsystem might not work properly when a vkernel is run inside Implement a new signal delivery mechanism, SA_MAILBOX. If specified the Link up the interrupt frame to the systimer API. Use PGEX_U to indicate Rename type to avoid conflict with 'kqueue' symbol. Add KQUEUE support to the TAP interface. Add O_ASYNC (SIGIO) support to kqueue(). Also add F_SETOWN support. Add kqueue based async I/O support to the virtual kernel. Convert VKE to Pass an interrupt frame to kqueue-based interrupts. Modify the console Close two holes in the pmap code. The page table self mapping does not cputimer_intr_reload() - prevent a negatively indexed or too-small a reload We want the virtual kernel to be default-secure. Disable writes to kernel Implement getcontext(), setcontext(), makecontext(), and swapcontext(). Add missing -I path when compiling assembly files. Rename /usr/src/sys/machine to /usr/src/sys/platform. Give the platform Fix license issue by removing -lreadline. The programs don't reference Remove the advertising clause from vinum with permission from Greg Lehey, Fix a crash related to the NPX (floating point) interrupt. The interrupt checkdirs() was being passed the wrong mount point, resulting in a panic Poor attempt to tracking the stack frame through a trap. Adjust for Stop init before ripping its filesystem references out in order to Fix the incorrect addition of a leading '/' in file paths in the journaling Setup for 1.8 release - bump HEAD to 1.7.1 and synchronize preview tag. Setup for 1.8 release - add new targets to Makefile to udpate the slip tag Setup for 1.8 release - Adjust HEAD to 1.9.0 Fix generation of the mount path for "/" when a process is chrooted into Remove gobsd from the list. Implement -D smbfs was not guarenteeing a NULL return vnode on error. This is required Add note on using 'handle SIGSEGV noprint' when gdb'ing a virtual kernel. Make sure all string buffers passed from userland are terminated before Initial syslink system call documentation and overview. Minor syntax cleanup Generate a warning if a wired page is encountered on a queue during a free Try to catch double-free bugs in the ACPI code. For now generate a warning Add syslink.2 to the install list. syslink work - Implement code for a reformulated system call, giving the Update the syslink documentation. A number of major design changes have Reformulate the syslink_msg structure a bit. Instead of trying to create Update the syslink documentation. This is still a work in progress. The Cleanup and reformulate some of the comments. Add IP_MINTTL socket option - used to set the minimum acceptable TTL a Add subr_alist.c. This is a bitmap allocator that works very similarly to Remove ldconfig_paths_aout, it is no longer used. Use SHUT_RD instead of a hardcoded value of 0 in calls to shutdown(). Kernel virtual memory must be mapped on a segment address boundary. Try Bring in the skeleton infrastructure and manual page for the new syslink Probably the last change to the syslink() system call. Allow a generic Don't allow snd_nxt to be set to a value less then snd_una when restoring Allocations of size greater then the radix were not returning the correct We have a few generation sets for Red-Black trees that implement RLOOKUP Clean up the so_pru_soreceive() API a bit to make it easier to read Make 'last mounted on' reporting by fsck work again. Add a new option Convert all pr_usrreqs structure initializations to the .name = data format. Just throw all the main arguments for syslink() into syslink_info and Give the sockbuf structure its own header file and supporting source file. Sync netstat up to the sockbuf changes. sbappendcontrol() was improperly setting sb_lastmbuf, creating a possible IPV6 type 0 route headers are a flawed design, insecure by default, and Move syslink_desc to sys/syslink_rpc.h so kernel code does not need Fix various paths in rc.d/diskless and friends. Add a generally accessible cpu_pause() inline for spin loops. Implement ncpus_fit and ncpus_fit_mask. Admittedly not the best name. When <sys/user.h> is included, it MUST be included first because it sets Add a shortcut function, objcache_create_mbacked(), which is more complex Implement SYSREF - structural reference counting, allocation, and sysid When <sys/user.h> is included, it MUST be included first because it sets Use the __boolean_t defined in machine/stdint.h instead of the Remove unneeded references to sys/syslink.h. Get syslink_desc from * Use SYSREF for vmspace structures. This replaces the vmspace structure's Store the frequency and cputimer used to initialize a periodic systimer. Revamp SYSINIT ordering. Relabel sysinit IDs (SI_* in sys/kernel.h) to EST's module was being installed before the module list got initialized, Reorder cpu interrupt enablement, do it in the code that drops It is possible for spurious interrupt(s) to be posted to an AP cpu Make the mountroot> prompt a bit more user-friendly. More cleanups, do not allow backspacing beyond the start of the line. Implement kern.do_async_attach. default disabled. To enable add Move clock registration from before SMP startup to after. APIC_IO builds Add missing crit_exit(); ata_boot_attach() is no longer used, #if 0 it out. Document the interrupt moderation timer and the fact that even though Reduce the livelock limit from 50Khz to 40Khz. When thread0 is initialized it must also be LWKT scheduled or LWKT will Add a new system call, lwp_rtprio(), and regenerate system calls. Make libthread_xu use the new lwp_rtprio() system call, mainly taken from Followup commit - fix a bug in the last commit. pci_get_resource_list() was returning an illegal pointer instead of NULL Update the vget, vput, vref, vrele, vhold, and vdrop documentation Update vnode.9, correct spelling. The bus_get_resource_list DEVMETHOD is primarily used to by PCI devices Use SYSREF to reference count struct vnode. v_usecount is now Update for vnode changes. Update vnodeinfo to handle the recent vnode changes. Changes to consdev - low level kernel console initialization. Add fields to the ktrace header to allow kdump to also display the TID Replace NOCDEV with NULL. NOCDEV was ((void *)-1) and as inherited Give the device major / minor numbers their own separate 32 bit fields Synchronize libkvm etc, with recent kernel major/minor device Fix a bug where multiple mounts on the same mount point cause the Fix the location of Make.tags.inc Add the ID for the Realtek ALC862 codec to the hda sound driver. Remove variables that are no longer used due to the previous commit. Fix a vnode recyclement namecache<->vnode deadlock introduced with recent Ooops, cache_inval_vp_nonblock() was being called too late, after the Make the kern.ipc.nmbclusters and kern.ipc.nmbufs sysctls read-only. Remove old unused cruft. Remove the ancient diskpart program. Start untangling the disklabel from various bits of code with the goal of Continue untangling the disklabel. Have most disk device drivers fill out Continue untangling the disklabel. Use the generic disk_info structure Continue untangling the disklabel. Reorganize struct partinfo and the Add a new command, /sbin/diskinfo, which uses the revamped DIOCGPART The normal ATA driver is capable of handling 48 bit block addressing, but * The diskslice abstraction now stores offsets/sizes as 64 bit quantities. Support 64 bit file sizes and 64 bit sector numbers. Continue untangling the disklabel. Add sector index reservation fields Port 16 byte SCSI command support from FreeBSD. This adds support for Add dev_drefs() - return the number of references on a cdev_t Remove the roll-your-own disklabel from CCD. Use the kernel disk manager Remove the roll-your-own disklabel from the ATA CD driver. Use the Remove support for mcd and scd - these were old proprietary ISA cdrom Remove libdisk from the build. Synchronize the NATA kernel build. Use DIOCGPART instead of DIOCGDINFO to remove references to the disklabel Remove #include <sys/disklabel.h> from various source files which no longer Remove libdisk from the Makefile. Add getdisktabbyname() to libc. This will soon replace getdiskbyname(). Fix buildworld, getdiskbyname() has moved to <disktab.h> Add back PCI_MAP_FIXUP, it was mistakenly removed. Remove the NATA config file generation rules. Add a rule to the check: use_mcd.h, use_scd.h no longer exist. Finish moving boot/i386 to boot/pc32 (it was left half done), remove Cleanup shutdown(2) usage and make it consistent. The implementation in rsh Update # comments and documentation for disktab(5). Continue untangling the disklabel. Use the WHOLE_DISK_SLICE instead of the compatibility slice to Implement raw extensions for WHOLE_DISK_SLICE device accesses for acd0. Temporary hack until we can get rid of the disklabel dependancies. Make savecore work wagain by using the new 64 bit dumplo (_dumplo64). Continue untangling the disklabel. Clean up dumpdev handling by having Keep the ds_skip_* fields in struct diskslice properly synchronized. Fix a one-character allocated string buffer overflow that was introduced Fix device recognition, /dev/vn0 now uses WHOLE_SLICE_PART, not partition 'c'. Include geometry data in DIOCGPART so fdisk can use it instead of trying Disklabel operations are no longer legal on the raw disk, use DIOCGPART to Quickly update UPDATING with 1.8 -> 1.9+ documentation. Cleanup diskerr() output a bit - don't say it was trying to write when When a traditional bsd disklabel is present, try to reserve SBSIZE bytes Remove all dependancies newfs had on accessing the disklabel. Use More disklabel disentanglement - use DIOCGPART instead of DIOCGDLABEL. Remove the fragment size, block size, and cpg fields from the disklabel Continue untangling the disklabel. Revert sc1 to testing getuid() like it says it does in the output. Use 0x%08x for all minor numbers, not just those > 255. Fix a bug in recent commits. When creating a virgin disk label for devices Handle disklabels with the disk management layer instead of rolling our own Remove DIOCWLABEL operation. Doing it destroyed the purpose of having Remove unused define. Simplify the lwkt_msg structure by removing two unused fields and a number * Greatly reduce the complexity of the LWKT messaging and port abstraction. The dump device must be opened before ioctls can be performed on it. Fix the kinfo run/sleep state for pure kernel threads. This effects /bin/ps LWKT message ports contain a number of function pointers which abstract Properly detect interruptable LWKT sleeps and display as 'S' instead of 'D'. Do an even better job discerning between blocked threads and blocked Add lwkt_sleep() to formalize a shortcut numerous bits of code have been Update documentation. Add a -c file option to the vkernel to specify CD images. The first -c or -r Add the kernel support function allocsysid(). The proper root device for a vkernel fs is vkd0s0a, not vkd0a. Syslink API work - greatly simplify the syslink_msg structure. Reimplement From within a virtual kernel, make /sbin/shutdown and /sbin/halt actually Open the root image O_EXLOCK|O_NONBLOCK and exit with an error message Add some syslink debugging programs. Remove unnecessary initialization and fix a GCC-4.x run-time linking issue Rename private to privdata and class to srclass to avoid conflicts with Adjust M_NOWAIT to M_WAITOK or M_INTWAIT as appropriate. Add flsl() for the NATA driver. Add flsl() for the NATA driver. Add a missing header file dependancy. Merge all the FreeBSD work done since our initial import of NATA, except Bring in 1.343 from FreeBSD. FreeBSD commit message: Synchronize to FreeBSD 1.35 - just adds a #define for ATA_SMART_CMD. Add a timings line for UDMA6 in two places. The drivers in question may or M_NOWAIT can only be used in a driver where a failed memory allocation is Implement boundary and maximum segment size handling in bus_dmamap_load(). Set the IDE DMA start bit as a separate I/O write from the DMA port Part 1/2: Add a sanity check to the NATA interrupt code to assert that Go to bed before the sun comes up. Catch up a bit with FreeBSD netgraph by replacing *LEN constants with Do any crash dump operation before the shutdown_post_sync event handler Add a field to the keyboard abstraction structure that allows the USB When compiling a kernel with all ktr logging (KTR_ALL), do not auto-enable Add polling support to BGE. Use the slab cache for PAGE_SIZE and PAGE_SIZE*2 sized allocations. This Add KTR logging for SMP page table syncing ops. Implement vm_fault_object_page(). This function returns a held VM page Entirely remove exec_map from the kernel. Use the new vm_fault_object_page() Get rid of some broken _KERNEL_VIRTUAL hacks. Remove unused variables after last commit. Remove the last source of SMP TLB invalidations in the critical code path Formalize the object sleep/wakeup code when waiting on a dead VM object and Increase the tsleep/wakeup hash table size and augment the KTR logging a bit. Add a new option to ktrdump (-l) which causes it to loop awaiting new data Change the -a option to be a bit friendlier. Have it print sorted, Create an upgrade target for MAKEDEV. This target will attempt to Remove the temporary NATA kernel build config file. Change GENERIC to This patch allows umct (USB<->RS232 adapter) to write to devices that do Move initialization of a vnode's various red-black trees to the CTOR function Dont poll PS/2 mouse interrupts, it can cause the mouse to get jumpy. Refuse to poll a detached sim. dev_dopen() can be called multiple times with only one dev_dclose() when Expand the diskslice->ds_openmask from 8 bits to 256 bits to cover all Do not destroy the device queue, it is needed by the peripheral code xpt_bus_deregister() never returns 0, don't test for it. When getnewvnode() is called the vnode's v_type defaults to VNON. Syncer Add vfs.nfs.pos_cache_timeout to complement vfs.nfs.neg_cache_timeout. Import the kernel GPT and UUID header files from FreeBSD, and bring in Bring uuidgen(3) into libc and implement the uuidgen() system call. Implement an opaque function, if_getanyethermac(), which retrieves MAC Regenerate system callsa (add uuidgen()). Update all sound code to use the snd_*() locking abstraction and sndlock_t. Correct mistake in last commit. Create the USB task queues before creating the event thread to avoid Bring the gpt labeling program in from FreeBSD. Backout the last commit, it's creating panics. Implement (non-bootable) GPT support. If a PMBR partition type is detected Add subr_diskgpt.c to the platform conf files. Add subr_diskgpt.c - oops. part of the GPT commit. Bring in the uuidgen utility from FreeBSD. Have UFS set the vnode type to VBAD instead of VNON so it gets cleaned Add two new UUID helper functions to libc, uuid_name_lookup() and Lines in /etc/uuids[.local] beginning with '#' are considered comments. Change the location of the files to /etc/defaults/uuids and /etc/uuids. Fix mistake in last commit, the file locations were not changed properly. Augment RB tree macros even more, allowing for static declarations, Adjust gpt to use the new UUID lookup functions via /etc/[defaults/]uuids. Create defaults/uuids and adjust the build to copy the file to /etc/defaults Make indexes start at 0, not 1, so they match the GPT partition numbers. Fix compiler warning (embedded /*) Fix an overflow in the GPT code, I wasn't allocating enough slice structures. Disable per-channel interrupt sources before enabling the master interrupt Implement SIGTERM handling. When a SIGTERM is received by a VKERNEL, it More syslink messaging work. Now basically done except for the I/O 'DMA' Disklabel separation work - Generally shift all disklabel-specific Disklabel separation work - more. Move all the code related to handling the current 32 bit disklabel Improve the error message for gpt add a little. Implement non-booting support for the DragonFly 64 bit disklabel: Make some adjustments to clean up structural field names. Add type and Correct a couple of uuid retention issues. Output the storage uuid for Rename d_obj_uuid to d_stor_uuid to conform to the naming convention being Add the -p pidfile option to the vkernel. Refuse to label media that is too large to handle a 32 bit disklabel The fstype was not being properly tested for a CCD uuid. Do not blindly allow the block count to overflow. Restrict newfs filesystem Correct a bug in the -S truncation mode where the mode was not being passed Fix an issue with positive namecache timeouts. Locked children often Fix rts_input() which is the only procedure which calls raw_input(). As Recode the state machine to make it a bit less confusing. Collapse the Adjust debug output so columns line up better. Create a default dntpd.conf file for DragonFly using three pool.ntp.org Add a new option (-i) that allows the insane deviation value to be set, and Repo-copy numerous files from sys/emulation/posix4 to sys/sys and sys/kern A file descriptor of -1 is legal when accessing journal status. Just allow Implement jscan -o. Take the patch from Steve and add some additional Fix a bug-a-boo, the type uuid was being printed instead of the storage Clarify cpu localization requirements when using callout_stop() and Get out-of-band DMA buffers working for user<->user syslinks. This Add a new flag, XIOF_VMLINEAR, which requires that the buffer being mapped Add O_MAPONREAD (not yet implemented). This will have the semantics of Clean up syslink a bit and add an abstraction that will eventually allow This is a simple little syslink test program which ping-pongs a 64K Implement struct lwp->lwp_vmspace. Leave p_vmspace intact. This allows Flag the checkpoint descriptor so on restore we can identify it and use the Add MLINKS for checkpoint.1, because most point looking for information Update the documentation for sys_checkpoint(). Move the P_WEXIT check from lwpsignal() to kern_kill(). That is, disallow Try to avoid accidental foot shooting by not allowing a virtual kernel A signal is sent to a particular LWP must be delivered to that LWP and never More multi-threaded support for virtualization. Move the save context Bring in all of Joe Talbott's SMP virtual kernel work to date, which makes Conditionalize SMP bits for non-SMP builds. Use dladdr() to obtain symbol names when possible and try to dump the Also credit lots of help from Aggelos Economopoulos <aoiko@cc.ece.ntua.gr> Do not allow umtx_sleep() to restart on a restartable signal. We want to Implement an architecture function cpu_mplock_contested() which is Clean up a kprintf() that was missing a newline. Only use the symbol returned by dladdr() if its address is <= the Copy a junk file from pc32 needed for <time.h> sched_ithd() must be called from within a critical section. The kernel perfmon support (options PERFMON) was trying to initialize its The real-kernel madvise and mcontrol system calls handle SMP interactions Add an option (-n ncpus) to specify the number of cpus a virtual kernel Increase SMP_MAXCPU to 31. Can't do 32 (boo hoo!) because spinlocks need sigwinch has to run with the big giant lock so use the DragonFly Exhaust the virtual kernel network interface even if we cannot allocate Put a timeout on the umtx_sleep() in the idle loop and add conditional Because the objcache caches up to two magazines on each cpu some pretty Make the virtual kernel's systimer work with SMP builds. Have it Give virtual kernels access to sysctl() prototypes and clean up warnings. A virtual kernel can cause a vm_page's hold_count to exceed 32768. Make Implement an architecture call for contended spinlocks so the vkernel can The vkernel's copyin/copyout implementation is not MP safe, acquire and Add usched_mastermask - a master cpu mask specifying which cpus user If more then 2 virtual cpus are present, dedicate one to handle I/O Fix a number of races in the controlling terminal open/close code. Fix an issue which arises with the TAP interface when the highly Add a section on how to build the world inside a virtual kernel. Try to catch double-replies a little earlier so we get a more meaningful Fix an insufficient test of the message flags when determining whether Update the CCD and disklabel documentation to reflect changes in Do not synchronously waitmsg in the unix domain socket's custom putport Use I/O size limits in underlying devices to govern I/O chunk Add SHUTDOWN_PRI_DRIVER and move all driver shutdown functions from Add an sc_maxiosize field which the ccd driver now needs. Fix LWP support on exec. exec now properly kills all LWPs. Be more verbose in the bad-opencount assertion. Synchronous libarchive to 2.2.4 from FreeBSD, including fixes related to Merge from vendor branch LIBARCHIVE: Synchronous libarchive to 2.2.4 from FreeBSD, including fixes related to Update to include info on last update. Reparse device specifications. The slice is no longer optional. Clean up the ioctl switch and add support for DIOCGPART which is Rename the new cvsup bootstrap kit so make nrelease knows a new one Temporarily reenable writing to the label area for backwards compatibility. Update the installer to dfuibe_installer-1.1.7nb1. Add logic to remove Be a little more verbose when reporting unmount errors. Remove the requirement that calls to vn_strategy() be limited to the Temporarily hack around an interrupt race against device detach by Properly initialize next_cpu to 0 when '-l map' is used for the default The disk layer must not inherit the D_TRACKCLOSE flag from the underlying Do not loop forever doing 0-sized I/Os if si_iosize_max is 0. Instead When CAM creates the disk layer, set the underlying raw device's Incorporate the device DMA limitations into the request transfer size Add assertions for 0-length transfers and panic if one is attempted. SCSI CD devices do not support raw mode accesses (yet). This fixes dssetmask() was being called too early, causing the disk layer to believe Remove a duplicate SUBDIR += Clarify two usage cases for umtx.2 Synchronize the syslink manual page with the system call. Minor checkpt usage() patch. Bump DragonFly_version and create a subvers-DEVELOPMENT file for HEAD for Add logic to allow the first hog partition to specify a base offset. This release engineering: Add a slip tag for 1.10 and add an example cvsup Release engineering: Update version information in HEAD to reflect 1.11 Update to 9.3.4-P1 with key id generation security fix. Merge from vendor branch BIND: Turn syscall_mpsafe and trap_mpsafe on by default. This causes system calls Include a virtual kernel in the release along with GENERIC. Add an ordering field to the interrupt config hook structure and adjust Fix missing history->hist adjustments from libreadline->libedit commit. Add infrastructure to locate a disk device by name by scanning the disk When an inode collision occurs a partially initialized vnode will be Introduce krateprintf(), a rate-controlled kprintf(), and the related Rate-limit residual I/O warnings from the VM system that occur when a Allow the compatibility slice (s0) to be specified. vrecycle() is typically called from a VFS's inactive function, which Detect the case where rename()ing over a file that has been unlinked but Fix vinum. Vinum illegally uses device ops such as dev_dopen(), Fix a coding mistake when dequeueing memory disk BIOs. I missed this file. We are running 9.3.4-P1 now, not 9.3.4. The LWP run/sleep state does not always match the actual state of the The distribution installs a Makefile in /usr with easy-to-use targets to Breakout a target for preview as well, and use the slip tag for the Update some packages supplied by corecode. The new bootstrap is a binary Do not require AHC_PCI_CONFIG to be defined. Adjust the installer to properly install /usr/Makefile. Fix build errors when MSDOSFS_DEBUG is defined. Fix a mbuf leak that was introduced in April. In April I made a change Add the MBUF_DEBUG option. This is a fairly invasive option that should Explicitly set a large receive buffer for datagram sockets to give syslog nfe_init() can be called as a side effect of certain ioctl operations with Introduce two delays in nfe_stop(). Add support for a new revision of the RealTek 8168B/8111B called SPIN3. The 1's complement checksum in m->m_pkthdr.csum_data was not being properly It is possible to catch a LWP while it is being created or destroyed, Port FreeBSD/pf_norm.c 1.18 - fix 1's complement carry for csum_data when Oops. Correct attribution for the last commit - 1's complement csum_data Add another fix to the 1's complement checksum. A second carry does not Make m_mclfree() MP safe by fixing a N->0 dereferencing race. The spinlock The cvs checkout commands were improperly specified the -d option. Change the ordering of the zombie test for ^T to avoid a NULL pointer Fix the fstab installation example. vkd0a -> vkd0s0a. Explicitly extract the sector size for the underlying media. This solves Replace the huge mess that was vnode_pager_generic_getpages() with a much Add xio_init_pages(), which builds an XIO based on an array of vm_page_t's. Remove the vpp (returned underlying device vnode) argument from VOP_BMAP(). The new VOP_N*() (namespace) operations pass a pointer to a namecache Add additional functionality to the syslink implementation. Give the Part 1/many USERFS support. Bring in the initial userfs infrastructure. Fix for amd geode cs5536 companion (ehci) panic. Also fix a word-reversed Fix pci bus detection on certain motherboards. Fixes bus detect on Add '-H', 'nlwp', and 'tid' options to ps(1) to display some LWP data (inspired Signals have to be blocked when creating our LWPs or a LWP may receive a Convert the lwp list into a red-black tree. This greatly reduces the Clean up the kvm process code. This is only used when trying to get a Bring CARP into the tree. CARP = Common Address Redundancy Protocol, which Deprecate 'installer_*' targets. If used a warning is generated and the Fix the root device selection to match the manual page. Before it was always Fix another ^T race related to catching a LWP just as it is being created, Add a prototype and wrapper for lockuninit() to complement spin_uninit(). getpages/putpages fixup part 1 - Add support for UIO_NOCOPY VOP_WRITEs to Do not try to dump UIO_NOCOPY writes to the journal. There's nothing Temporary hack up a fix for msdos which avoids a deadlock between the VM Add missing xfer->state assignment. Add missing if_softc assignment, allowing pppX interfaces to Add geode companion support to ohci probe code part2 ppX fix for altq/pf. Add T2300 cpu support to EST. Fix a bugaboo in the last commit. Pages are hard-busied for getpages, Fix a misleading error message when the device specified at the mountroot Add vop_stdgetpages() and vop_stdputpages() and replace those filesystem Fix a bug in vnode_pager_generic_getpages(). This function was improperly Add a MNTK_ flag to the mount structure allowing a VFS to specify that Bring in FreeBSD/1.177 - fix a bug in a call made to useracc(). This Force an over-the-wire transaction when resolving the root of an NFS mount Change the virtual kernel's default hz to 20, because the kqueue timers we Set si_iosize_max to silence run-time warnings. Remove a bogus assertion. in_vm86call may have been set by some unrelated General userfs fleshing out work. Abstract out construction and kern_access() had the same bug kern_stat() had with regards to a Fix bugs in the handling of CIDR specifications such as 'route add 128/8 Fix a bootstrapping issue with the change of the default gcc to 4.1. alist debug mode - report unrounded byte usage. Interrupt thread preemption was switching in threads with held tokens Add a note that dntpd can be started at boot time even if the network Add an install location for the vkernel binary, add a second on how to Add a uuid for the "DragonFly HAMMER" filesystem type. libiconv was declaring a base kobj_class structure instead of an extended Remove the /usr/lib/crt* files. These files have been located in Allow the crc32.c module to be used in userland or kernel code. Indicate that alist_free() calls do not have to supply power-of-2 aligned Add a typedef for uuid_t for kernel compiles. One already existed for Adjust RB_PROTOTYPEX to match RB_GENERATE_XLOOKUP. These declare a Red-Black Add syslink_vfs.h for userfs, defining the syslink element infrastructure Initial commit of mount_hammer - basic working skeleton for testing. Initial commit of newfs_hammer - basic working skeleton for testing. Primary header file infrastructure and A-list implementation for the Adjust the description of HAMMER's storage limitations. I have rearranged Clean up the structural organization. Separate out A-lists and make Give the A-list code the ability to do a forward or reverse allocation Add volume, super-cluster, cluster, and buffer abstractions to provide Fix a race between exit and kinfo_proc. proc->p_pgrp and the related Fix more NULL structural dereferences that can occur when a process is in Reactivate a vnode after associated it with deadfs after a forced unmount. HAMMER part 1/many. This is a clear-my-plate commit. Add a HAMMER kernel build option, add a VFS type for HAMMER, add a file Synchronize newfs_hammer with recent changes. Correct a bug in the lockf code. F_NOEND was not being properly set. Properly set the buf_type in the volume, super-cluster, and cluster headers. A delete_tid of 0 indicates a record which has not yet been deleted and HAMMER 2/many - core mount and unmount code now works, the B-Tree search Add a PHOLD/PRELE sequence around a sysctl_out to fix a race against Modify struct vattr: Break-out the standard UNIX uid/gid tests for VOP_ACCESS into a helper file. Convert the global 'bioops' into per-mount bio_ops. For now we also have When the quotacheck has not been run the quota code may have to Add regetblk() - reacquire a buffer lock. The buffer must be B_LOCKED or Silence an annoying compiler warning. HAMMER part 2/many. Add bio_ops->io_checkread and io_checkwrite - a read and write pre-check Correct bug in last commit. Remove i386 support. Separate ssb_lock() and ssb_unlock() into its own header file and reimplement HAMMER 3/many - more core infrastructure. HAMMER 4/many - more core infrastructure Update the documentation for getdirentries(2). Describe issues with using Add a helper function vop_helper_setattr_flags() modeled off of UFS's Adjust getdirentries() to allow basep to be NULL. Use off_t for the loff Make necessary changes to readdir/getdirentries to support HAMMER. HAMMER Clean up some missing 32->64 bit cookie conversions. Adjust the NFS server Add vop_helper_create_uid() - roughly taken from UFS. Figure out special Break out the scan info structure's support routines so external code HAMMER 5/many - in-memory cache and more vnops. Fix loc_seek - using lseek to acquire the directory cookie. Replace the very predictable 'random' IP sequence number generator with Remove debugging printfs. Drop into DDB if the vkernel hits a floting point exception (SIGFPE). Catch vkernel divide-by-0 traps a bit earlier so they are reported properly. HAMMER 6/many - memory->disk flush, single-cluster sync to disk, more vnops. HAMMER 7/many - deletions, overwrites, B-Tree work. Initialize idx_ldata - a forward iterator for allocating large (16K) data Make fixes to the A-list initialization and properly allocate records HAMMER 8/many - A-list, B-Tree fixes. As-of queries HAMMER 9/many - btree removal cases, mount nohistory Bring the getent(1) program in from FreeBSD and link it into the build. Bring in Matthias Schmidt's nice little pkg_search script. Add clarifying comments for LWP_WSTOP and LWP_WEXIT. Fix a 'panic: vm_page_cache: caching a dirty page' assertion. Even though Add missing sys/proc.h Fix krateprintf(). The frequency was improperly being multiplied by hz Update pkg_search to download and use the pkg_summary file, allowing Synchronize most of the remaining FreeBSD changes for Elf64 typedefs. Install hammer includes in /usr/include/vfs/hammer. Fix bug in as-of mount date specification. Save and restore the FP context in the signal stack frame. HAMMER 10/many - synchronize miscellaneous work. Properly set the mc_fpformat field in the ucontext so libc_r knows which FP Use the mc_fpformat field to determine the correct FP save/restore FP registers are now saved and restored by the kernel, remove the HAMMER 11/many - initial spike commit. HAMMER 12/many - add VOPs for symlinks, device, and fifo support. HAMMER 12/many - buffer cache sync, buffer cache interactions, misc fixes. HAMMER 13/many - Stabilization commit HAMMER 13B/many - addendum to 13. Add the 'hammer' utility. This is going to be a catch-all for various HAMMER 14/many - historical access cleanup, itimes, bug fixes. UFS vnodes must have VM objects before they can be truncated. This is Back out the last commit, it asserts in the getblk code due to the vnode HAMMER 15/many - user utility infrastructure, refactor alists, misc fill_kinfo_proc() may be asked to load information on a zombied process, HAMMER 16/many - Recovery infrastructure, misc bug fixes HAMMER 16B/many: Fix data overwrite case. Fix buffer cache deadlocks by splitting dirty buffers into two categories: HAMMER 17/many: Refactor IO backend, clean up buffer cache deadlocks. HAMMER 18/many: Stabilization pass HAMMER 18B/many: Stabilization pass Attempt to fix an interrupt recursion which can occur in specific HAMMER 19/Many - Cleanup pass, cluster recovery code. HAMMER 20A/many: B-Tree lookup API cleanup, B-Tree changes. HAMMER utilities: HAMMER 20B/many: New spike topology, simplify the B-Tree code. HAMMER utilities: HAMMER 21/many: B-Tree node locking finalization. Add hammer_recover.c for kernel builds w/ HAMMER. Fix an issue with cache_rename(). This procedure previously copied a Fix time conversion bugs in the stamp command. HAMMER 22/many: Recovery and B-Tree work. HAMMER utilities: HAMMER 23/many: Recovery, B-Tree, spike, I/O work. HAMMER utilities: Features and sync with VFS. HAMMER 24/many: Clean up edge cases HAMMER utilities: Add a verbose (-v) option. HAMMER 25/many: get fsx (filesystem test) working, cleanup pass HAMMER 24B/many: Edge cases, cleanups HAMMER utilities: synchronize newfs_hammer. Conditionalize the illegal MXCSR tests on SSE support. Machines that did Address a potential weakness in IBAA. The generator needs to be warmed up Make sure scb->lastfound is NULLed out when it matches the entry being HAMMER 25/many: Add an ioctl API for HAMMER. HAMMER utilities: Add the 'prune' and 'history' commands. Add a conditional so we don't have to drag in everything when a user HAMMER 25/many: Pruning code * Implement a mountctl() op for setting export control on a filesystem. Fix a compiler warning. Implement NFS support and export control for HAMMER. HAMMER 26/many: More NFS support work, rename fixes Fix some NFS related bugs which cause the mount point's mnt_refs counter HAMMER 26/many: Misc features. HAMMER Utilities: Add an 'everything' directive to the prune command. This HAMMER 27/many: Major surgery - change allocation model HAMMER 28/many: Implement zoned blockmap HAMMER 28A/many: Translation and write performance optimizations Make the Brother HL1240 printer work with ulpt. Adjust nrelease to a new package set. Bump to 1.11.1 prior to 1.12 branch and update the preview tag. Release engineering, Add a slip target to /usr/src/Makefile for 1.12 and Release Engineering on HEAD. Oops, drop head's version back one for head (it was set to the release's Fix an issue where the random number generator's random event injector Fix mount_nfs to allow hostnames which begin with a digit. Fix a use-after-free bug in the envelope code just after a port 25 fork. HAMMER 29/many: Work on the blockmap, implement the freemap. Require the the core file be owned by the user. Please also see the HAMMER 30/many: blockmap work. HAMMER 30A/many: blockmap cleanup HAMMER 30B/many: Minor bug fix. HAMMER 30C/many: Fix more TID synchronization issues HAMMER 31A/many: File data size optimization HAMMER 31B/many: Fix busy block dev on HAMMER umount HAMMER 31C/many: Fix livelock in deadlock handling code Clean up the token code and implement lwkt_token_is_stale(). Users of Patch additional use-after-free cases. HAMMER 32/many: Record holes, initial undo API, initial reblocking code HAMMER utilities: Add the reblock command, adjust newfs_hammer. HAMMER 32B/many: Reblocking work. HAMMER 33/many: Expand transaction processing, fix bug in B-Tree HAMMER 33B/many: Further B-Tree fix. Improve vkernel support. A threaded process going into SZOMB may still have active threads which are HAMMER utilities: feature add. Remove calls to pmap_clear_modify() in the swap_pager, fixing a kernel panic. HAMMER 33C/many: features and bug fixes. We must hold the lwp we are trying to kill to prevent it from being HAMMER 34/many: Stabilization pass Miscellanious features and adjustments to cpdup. Bump cpdup's version to 1.08. Add a FS_HAMMER id for disklabel. Fix collision in conf/files, add hammer_signal.c. Add mount_hammer, newfs_hammer, and the hammer utility to the build. HAMMER 34B/many: Stabilization pass. Synchronize various changes from FreeBSD. This is not exhaustive but gets Rename PCIP_STORAGE_SATA_AHCI to PCIP_STORAGE_SATA_AHCI_1_0 HAMMER 35/many: Stabilization pass, cleanups HAMMER utilities: automatic sync/sleep HAMMER utilities: Add -lm for double arithmatic. HAMMER 35B/many: Stabilization pass, cleanups. HAMMER 35C/many: Stabilization pass. Make sure there is no possibility of a cothread trying to access the HAMMER 36/many: Stabilization pass. It's frankly long past time that we turn net.inet.tcp.always_keepalive HAMMER 36B/many: Misc debugging. Fix a snafu with the last commit. Not all of the new AHCI detection support Add fairq to altq. Fairq is a fair queueing algorithm with bandwidth Bring the 'probability' keyword into PF from NetBSD. This feature allows MFC 1.33/pf.c from NetBSD. Don't apply a window scale to the window Add parallel transaction support for remote source or target specifications. More cpdup work. Properly detach children so we dont have to pthread_join() them. Fixes Properly mark a transaction has being completed so the slave side of Update PORTING instructions for linux to support pthreads. Implement a number of major new features to PF. pfsync_state doesn't have or need a hash field, the state will be hashed Fix multiple issues with -p<parallel>, including several data corruption Fix ktrace for threaded processes. Move the KTRFAC_ACTIVE flag to the LWP Don't free held clean pages when asked to clean. Minor optimization to LIST_FOREACH_MUTABLE taken from FreeBSD. Fix a bug in umtx_sleep(). This function sleeps on the mutex's physical Fix some issues in libthread_xu's condvar implementation. Update the documentation for umtx_sleep() and umtx_wakeup(). Add __sreadahead() to help with pkgsrc's devel/m4. Remove debugging assertion. Refuse to talk with the remote cpdup if it's version is not compatible. Finish up cpdup. Bump the protocol version to 2 and refuse to talk to Add an interlock for certain usb task operations. Dive the scheduler to implement the yield function. For the moment it just Pass the current LWP to sigexit() instead of the current process. Remove Fix some IO sequencing performance issues and reformulate the strategy HAMMER 37/Many: Add a flush helper thread, clean up some inconsistencies. Fix a free() race due to a misplaced mutex unlock. * Remove the SINGLEUSE feature for telldir(), it does not conform to the Fix two A-list corruption cases. Fix panics which can occur when killing a threaded program. lwp_exit() HAMMER 38A/Many: Undo/Synchronization and crash recovery HAMMER 38B/Many: Undo/Synchronization and crash recovery HAMMER 38C/Many: Undo/Synchronization and crash recovery HAMMER 38D/Many: Undo/Synchronization and crash recovery HAMMER 38E/Many: Undo/Synchronization and crash recovery HAMMER 38E/Many: Undo/Synchronization and crash recovery HAMMER utilities: Misc documentation and new options. HAMMER 38E/Many: Undo/Synchronization and crash recovery HAMMER 38F/Many: Undo/Synchronization and crash recovery, stabilization pass Fix some pmap races in pc32 and vkernel, and other vkernel issues. Minor code reordering and documentation adjustments. Fix a NULL poiner dereference in the statistics collecting code as KTR_TESTLOG is a valid kernel option (it enables the KTR ipi performance Paging and swapping system fixes. HAMMER 39/Many: Parallel operations optimizations HAMMER Utilities: zone limit HAMMER 39B/Many: Cleanup pass The driver was improperly using kmem_free() instead of pmap_unmapdev(), Add some assertions when a buffer is reused Change the SMP wakeup() code to send an IPI to the target cpu's in parallel Cothreads do not have a globaldata context and cannot handle signals Add pmap_unmapdev() calls to detach functions for drivers which used Have vfsync() call buf_checkwrite() on buffers with bioops to determine HAMMER 40A/Many: Inode/link-count sequencer. HAMMER 40B/Many: Inode/link-count sequencer cleanup pass. HAMMER 40C/Many: Inode/link-count sequencer cleanup pass. Print the path even if we do not understand the filesystem type. HAMMER 40D/Many: Inode/link-count sequencer cleanup pass. HAMMER 40E/Many: Inode/link-count sequencer cleanup pass. HAMMER 40F/Many: Inode/link-count sequencer cleanup pass, UNDO cache. Correct a bug in seekdir/readdir which could cause the directory entry Print the 64 bit inode as a 64 bit quantity rather then a 32 bit quantity. The direct-write pipe code has a bug in it somewhere when the system is HAMMER 40F/Many: UNDO cleanup & stabilization. HAMMER Utilities: enhanced show, timeout option HAMMER 40G/Many: UNDO cleanup & stabilization. HAMMER 41/Many: Implement CRC checking (WARNING: On-media structures changed) HAMMER Utilities: Feature add Only call bwillwrite() for regular file write()s, instead of for all write()s. Keep track of the number of buffers undgoing IO, and include that number HAMMER Utilities: Sync with recent changes. HAMMER 41B/Many: Cleanup. Remove the SMP_MAXCPU override for vkernels, causing the build to revert Enable kern.trap_mpsafe and kern.syscall_mpsafe by default for vkernels. Fix a sizeof() the wrong variable name. The correct variable was the same Correct comments and minor variable naming and sysctl issues. Bump base development version to 197700 so it is properly distinct from Clear the direction flag (CLD) on entry to the kernel, to support future Recode the resource limit core (struct plimit) to fix a few races and Fix some lock ordering issues in the pipe code. Fix a race between the namecache and the vnode recycler. A vnode cannot be Fix a nasty memory corruption issue which can occur due to the kernel bcopy's Fix fork/vfork statistics. forks and vforks were being improperly counted Fix many bugs and issues in the VM system, particularly related to HAMMER 42/Many: Cleanup. Sync sysperf with some random stuff, and add a cld instruction tester. Return EINVAL if a NULL pointer is passed to the mutex routines, instead Fix a HAMMER assertion which turned out to be a bug in VOP_N*(). Sometimes HAMMER 42A/Many: Stabilization. Fix feature logic so changing kern.pipe.dwrite_enable on the fly works Finish moving the kernel from tsc_freq (32 bits) to tsc_frequency (64 bits). HAMMER Utilities: Feature add HAMMER 42B/Many: Stabilization. HAMMER 42C/Many: Stabilization. HAMMER Utilities: scan feedback Fix UP real kernel crash, a vkernel define was being used iproperly HAMMER Utilities: Features HAMMER 42D/Many: Stabilization. HAMMER 42E/Many: Cleanup. HAMMER Utilities: Cleanup. HAMMER 43/Many: Remove records from the media format, plus other stuff HAMMER 43A/Many: Cleanup, bug fixes. HAMMER 43B/Many: Correct delete-on-disk record bug. HAMMER Utilities: Misc features and adjustments. HAMMER 43C/Many: Performance cleanup HAMMER 44/Many: Stabilization pass, user-guaranteed transaction ids HAMMER Utilities: Feature add HAMMER Utilities: Features HAMMER 45/Many: Stabilization pass, undo sequencing. Fix str[c]spn off by one error. The char dummy must be dummy[2] to accomodate a nul terminator when dealing HAMMER Utilities: Stabilization pass. Add a sysctl jail.allow_raw_sockets (default to diabled) which allows Syntax cleanup and also commit a missing piece of the jail_allow_raw_sockets Change cluster_read() to not block on read-ahead buffers it is unable to HAMMER 46/Many: Performance pass, media changes, bug fixes. HAMMER Utilities: Update for HAMMER changes. Fix a number of core kernel issues related to HAMMER operation. HAMMER 46B/Many: Stabilization pass Fix an overflow in the time calculation. Add a DELAY(500) during the register init phase to give the device some time Properly track the write-open count when updating a msdos mount from RW to RO, Add a define for IEEE80211_FC1_PROTECTED. Bring in fixes for a bug which occurs when the filesystem become fulls. HAMMER 47/Many: Stabilization pass Fix a pipelining performance issue due to the way reading from the socket Calls to DIOCSYNCSLICEINFO were being made with the assumption that Use a per-bucket mutex to reduce contention and fix a seg-fault from a Bump version to 1.11. Fix a very old bug where the root mount was not getting a filesystem syncer Add vop_helper_chmod() and vop_helper_chown(). These helper functions HAMMER 48/Many: finish vop_setattr support, ncreate/nmknod/etc, minor bug fixes. * Implement SOCK_SEQPACKET sockets for local communications. These sockets Create a new daemon called vknetd. This daemon uses the new SOCK_SEQPACKET Get rid of an old and terrible hack. Local stream sockets enqueue packets Add vknetd to the build. Do not try to set-up the bridge or tap interfaces when connecting to Add the notty utility, a program I wrote long ago which I should have Fix socketvar.h inclusion by userland. This is a temporary hack and, Only test the IP protocol (ip_p) for IP frames. Implement a new utility called vknet. This utility interconnects the Generate a semi-random MAC address when connecting to a SOCK_SEQPACKET HAMMER 49/Many: Enhance pruning code HAMMER Utilities: Add the 'hammer softprune' command. HAMMER 49B/Many: Stabilization pass HAMMER Utilities: Cleanup HAMMER Utilities: New utility 'undo'. * Implement new system calls in the kernel: statvfs(), fstatvfs(), Implement a new system call: getvfsstat(). This system call returns Clean up statvfs() and related prototypes. Place the prototypes in the More header file cleanups related statvfs. Add getmntvinfo() which uses the new getvfsstat() system call. Use newly available libc and system calls related to statvfs to make df HAMMER Utilities: Performance adjustments, bug fixes. HAMMER 50/Many: VFS_STATVFS() support, stabilization. Even using the objcache we need a one-per-cpu free-thread cache in order Fix kernel compile warnings. HAMMER Utilities: Correct vol0_stat_freebigblocks. Add missing exit(1). Disallow negative seek positions for regular files, directories, and Add the UF_NOHISTORY and SF_NOHISTORY chflags flags. The nohistory flag Report the nohistory, noshistory, and nouhistory flags, and allow them HAMMER 51/Many: Filesystem full casework, nohistory flag. HAMMER Utilities: More pre-formatting, cleanup Do not update f_offset on EINVAL. HAMMER Utilities: Enhance mount_hammer HAMMER 52/Many: Read-only mounts and mount upgrades/downgrades. HAMMER 53A/Many: Read and write performance enhancements, etc. HAMMER Utilities: Critical bug in newfs_hammer HAMMER 53B/Many: Complete overhaul of strategy code, reservations, etc HAMMER 53C/Many: Stabilization Fix a SMP race in signotify_remote(). LWPHOLD() the lwp being Add an extern for hidirtybuffers. HAMMER 53D/Many: Stabilization Change bwillwrite() to smooth out performance under heavy loads. Blocking HAMMER 53E/Many: Performance tuning HAMMER 53F/Many: Fix deadlock. HAMMER 53G/Many: Performance tuning. Switch from bioq_insert_tail() to bioqdisksort(). When the kernel is HAMMER 53H/Many: Performance tuning, bug fixes HAMMER 54/Many: Performance tuning HAMMER 54B/Many: Performance tuning. HAMMER 54C/Many: Performing tuning, bug fixes Add missing LWPHOLD/LWPRELE in kinfo code. Reimplement B_AGE. Have it cycle the buffer in the queue twice instead of LWPHOLD/LWPRELE must be atomic ops because an IPI can call LWPRELE. HAMMER 54D/Many: Performance tuning. HAMMER 55: Performance tuning and bug fixes - MEDIA STRUCTURES CHANGED! HAMMER Utilities: Sync with commit 55 - MEDIA STRUCTURES CHANGED! Change the namecache lock warning delay from 1 to 5 seconds. We must process incoming IPI messages when spinning in the thread HAMMER 56A/Many: Performance tuning - MEDIA STRUCTURES CHANGED! HAMMER Utilities: sync with 56A HAMMER 56B/Many: Performance tuning - MEDIA STRUCTURES CHANGED! HAMMER Utilities: Sync with 56B Miscellanious performance adjustments to the kernel HAMMER Utilities: Bug fixes HAMMER 56C/Many: Performance tuning - MEDIA STRUCTURES CHANGED! Fix a bug in cluster_read(). An error returned by the call to HAMMER Utilities: Sync with 56D HAMMER 56D/Many: Media structure finalization, atime/mtime, etc. HAMMER 56E/Many: Correct bug in 56D HAMMER 56F/Many: Stabilization pass HAMMER 57/Many: Pseudofs support HAMMER Utilities: Add the 'pseudofs' directive for commit 57 Support S_IFDIR mknod() calls for HAMMER. This is used by the Hammer HAMMER Utilities: Sync with 58A HAMMER 58A/Many: Mirroring support part 1 HAMMER Utilities: Remove time/transaction-id conversion directives. HAMMER 58B/Many: Revamp ioctls, add non-monotonic timestamps, mirroring HAMMER Utilities: Sync to 58B HAMMER 59A/Many: Mirroring related work (and one bug fix). HAMMER Utilities: Sync with 59A HAMMER Utilities: Add "slave" option to hammer_mount. Make sure UFS disallows mknod()'s with type VDIR. Add KTR_HAMMER Vendor import of netgraph from FreeBSD-current 20080626 Merge from vendor branch NETGRAPH: Netgraph port from FreeBSD - initial porting work Add additional atomic ops from FreeBSD. Add files and options lines for NETGRAPH7 Apply patch supplied in FreeBSD-PR to ata-raid code: dummy_thr does not have to be committed and pthread_t might not even Increase the default request timeout from 5 seconds to 10 seconds. HAMMER 59B/Many: Stabilization pass - fixes for large file issues Fix a system performance issue created by ata_sort_queue(). This function Bump the sortq_lost check from 8 to 128, letting the disk optimally read or Replace the bwillwrite() subsystem to make it more fair to processes. HAMMER 59C/Many: Stabilization pass - fixes for large file issues Fix hopefully all possible deadlocks that can occur when mixed block sizes HAMMER 59D/Many: Sync with buffer cache changes in HEAD. HAMMER 59E/Many: Stabilization pass - fixes for large file issues Adjust comments. Fix an issue where CAM would attempt to illegally get a lockmgr() lock Fix a NULL pointer dereference when a DDB 'ps' attempts to HAMMER 59D/Many: Stabilization pass Fix a buf_daemon performance issue when running on machines with small HAMMER 59E/Many: Stabilization pass Add a new helper function, kmalloc_limit(). This function returns Fix a low-memory deadlock in the VM system which can occur on systems HAMMER 59F/Many: Stabilization pass Fix numerous pageout daemon -> buffer cache deadlocks in the main system. HAMMER 59G/Many: Stabilization pass (low memory issues) HAMMER 59H/Many: Stabilization pass HAMMER 59I/Many: Stabilization pass HAMMER 59J/Many: Features HAMMER 60A/many: Mirroring work HAMMER Utilities: Mirroring and pseudo-fs directives HAMMER 60B/many: Stabilization HAMMER Utilities: Sync with recent work. HAMMER Utilities: Stabilization HAMMER 60C/many: Mirroring HAMMER 60D/Many: Mirroring, bug fixes Error out if no volumes are specified instead of core-dumping. HAMMER 60E/Many: Mirroring, bug fixes HAMMER Utilities: Sync with 60E When creating a new HAMMER filesystem also create a PFS record for it, HAMMER 60F/Many: Mirroring HAMMER Utilities: Sync with 60F Rename fid_reserved to fid_ext. UFS+softupdates can build up thousands of dirty 1K buffers and run out HAMMER 60G/Many: Mirroring, bug fixes Add the HAMMER filesystem to GENERIC and VKERNEL. Cleanup - move a warning so it doesn't spam the screen so much, cleanup HAMMER 60H/Many: Stabilization pass HAMMER 60I/Many: Mirroring HAMMER Utilities: Sync with 60I HAMMER Utilities: Mirroring work HAMMER 60J/Many: Mirroring HAMMER Utilities: Sync with 60J Add crc32_ext() - allows continuation of a 32 bit crc. HAMMER 61A/Many: Stabilization HAMMER 61B/Many: Stabilization HAMMER 61C/Many: Stabilization Add a vclean_unlocked() call that allows HAMMER to try to get rid of a Correct a bug in the last commit. HAMMER 61D/Many: Mirroring features HAMMER Utillities: Sync with 61D HAMMER 61E/Many: Stabilization, Performance HAMMER Utilities: Cleanup HAMMER 61F/Many: Stabilization HAMMER 61F2/Many: Fix bug in last commit HAMMER 61G/Many: Stabilization of new flush_group code Use a 64 bit quantity to collect file size data instead of HAMMER 61E/Many: Stabilization, Performance Fix an asf core dump. Kernel support for HAMMER: HAMMER 61F/Many: Stabilization w/ simultanious pruning and reblocking HAMMER Utilities: Features 2.0 Release Engineering: 2.0 Release Engineering: 2.0 Release Engineering: NFS performance fixes. HAMMER 61E/Many: Features HAMMER Utilities: Sync with 61E Fix a bug in vmntvnodescan() revealed by the recent NFS sync fix. The Fix a bug where mount_nfs would properly parse an IP address, but would Fix an issue where libthread_xu was not accepting the full priority HAMMER 61G/Many: Stabilization HAMMER 61H/Many: Stabilization Fix a lock leak in nfs_create(), tracked down from a crash dump and HAMMER 62/Many: Stabilization, performance, and cleanup Add logic to warn of possible renames, and clearly state when failures may In DragonFly, gpt partitions look like slices in /dev, and we match the Code documentation only: Describe B_NOCACHE Give krateprintf() an initial burst capability if count is set to Make some adjustments to the buffer cache: Fix multiple bugs in CAM related devices which go away unexpectedly. This When dealing with a failed read properly set B_INVAL. HAMMER 63/Many: IO Error handling features Detach the SIM when a firewire disk device is disconnected. Leave the Try to make fwohci work more reliably. Stop printing 'phy int' to the O_CREAT was being allowed to leak through a read-only NFS export. HAMMER 64/Many: NFS, cross-device links, synctid HAMMER 65/Many: PFS cleanups and confusion removal HAMMER Utilities: Sync with HAMMER 65. HAMMER Utilities: Sync with HAMMER 65. Change 'default' to 'English' Synchronize some of the machine-independant AMD64 bits. Synchronize some of the machine-independant AMD64 bits. Change newfs_hammer to reserve a minimum of 100M for the UNDO FIFO. Any HAMMER commit An off-by-one malloc size was corrupting the installer's memory, Adjust the desiredvnodes (kern.maxvnodes) calculation for machines HAMMER - fix kmalloc exhaustion w/ 3G ram Pass the correct string to md_mount() when doing a diskless nfs mount. Add a terrible hack to GPT which allows non-EFI BIOSes to boot from it. Add a quick entry for the new 'gpt boot' directive. HAMMER: Mirroring, misc bug fixes HAMMER Utilities: Streaming mirroring! HAMMER Utilities: Cleanup HAMMER: Mirroring work Cast to unsigned long to match the rest of the expression. This is just AMD64 work: Create an #include layer for bus/pci and bus/isa so source files do not Add amd64 files for the ISA and PCI busses and adjust the header files HAMMER: fsync blocking fixes Fix a panic on boot that can occur if you hit keys on the keyboard Keep UFS compatible on 32 and 64 bit builds by changing the 'time_t' embedded HAMMER 2.1:01 - Stability HAMMER 2.0:02 - rmdir, stability Implement a bounce buffer for physio if the buffer passed from userland Remove daddr_t dependancies in the swap code. Move swblk_t and add Don't bump intr_context when running from the softint. Hopefully this AMD64 Support: AMD64 Support: AMD64 Support: AMD64 Support: AMD64 Support: AMD64 Support: Add memset() to help w/amd64 support. * Add a flag to track an in-transit socket abort to avoid races when closing Adjust the mcontext code to match amd64. Adjust the mcontext code to match amd64. AMD64: Fix bugs in cerror(). AMD64 - Sync AMD64 support from Jordan Gordeev's svn repository and Back-out the tls change. The field must be signed or the system will not Add BUF_CMD_FLUSH support - issue flush command to mass storage device. HAMMER: Mass storage flush command support AMD64: Add JG64 config file for testing purposes. AMD64: Fix the crossworld build. Flesh out BUF_CMD_FLUSH support. * Move /kernel to /boot/kernel and /modules to /boot/modules. Remove any vestiges of the old pam, particularly /etc/pam.conf. pam config Continue working the abort path. Move SS_ABORTING flag handling inward Adjust the boot code to boot from either a root with a /boot, or directly Remove boot0gpt - it isn't ready yet (and may never be). Bring hunt in from OpenBSD. The best multi-player terminal game ever! Update the rconfig examples. Correct a bug in auto.sh and add hammer.sh. Add a reference for /usr/share/examples/rconfig to the rconfig manual page. Fix a crash on access to TAP's owning thread. The owning thread can go away Fix an endless recursion and double fault which could occur when accessing Bump up the size of the boot partition in the boot+HAMMER rconfig Add missing sleep. Increase sockbuf send and receive buffers to 57344 bytes. In particular, Fix issues with the scheduler that were causing unnecessary reschedules Add a MSGF_NORESCHED feature for lwkt thread-based message ports. The Fix bug in hammer mirror command when used with a remote source Augment loader.conf's manual page to describe hw.usb.hack_defer_exploration Improve code flow for KASSERT and KKASSERT using __predict_false(). Fix an invalidation case that tends to occur under load on NFS servers or * Implement the ability to export NULLFS mounts via NFS. Unbreak buildworld a slightly different way. Adjust null.h to not Add the 'hammer cleanup' command. This is a meta-command which will If snapshots are disabled and the snapshots directory contains no * Fix a bug in runcmd() - the argv[] list was not NULL terminated. HAMMER: Fix a couple of minor non-corrupting bugs in HAMMER. Make two more changes to the ata request queue sorting code. Change the autoflush code to autoflush when a flush group reaches a Fix a double-flush which was occuring for every unlinked inode, resulting Rename the PFSD structure's prune_path[64] to snapshots[64]. sleep for a shorter period of time when an excessive number of inodes are Additions to 'hammer pfs-*': Do not loop forever if narg becomes negative. This can happen when asked Add vop_stdpathconf and default it so filesystems do not need to declare Use the new vop_stdpathconf() instead of rolling our own. Linux emulation adjustments. Add support for "RealTek 8102EL PCIe 10/100baseTX". Checksum support Do not return an EINVAL error for certain abort and disconnect cases. HAMMER Utilities: Adjust 'show' defaults. Add code to verify the data CRC by default, in addition to the B-Tree Change hammer_str_to_tid() and its callers to restrict the format of Add a new utility called 'monitor' which uses kqueue to monitor a Add KQUEUE support to HAMMER. Add vfs.nfs.flush_on_hlink and default to off. Allow an alignment default of 0 to be treated as 1. Try to do a better job aborting active requests when a usb mass storage Add kmalloc_raise_limit() - allow a subsystem to raise the allocation Raise the kmalloc limit for the M_HAMMER_INO pool based on desiredvnodes The priority mask used to compartmentalize the comparison to determine Add a new feature to mirror-copy and mirror-write. If the target PFS does Move the HAMMER option around a little and add ALTQ to the default VKERNEL Add required range checks prior to kmalloc()ing socket option buffer space. Fix flags handling for the install program. Properly set and clear flags * Change hc_remove() to return a -errno if an error occurs instead of -1. Fix a bootstrapping issue, UF_NOHISTORY may not exist on older systems. HAMMER utilities: HAMMER: HAMMER: Correct minor oops in last commit. Do not allow downgrading. Allow an error code to be returned in the head.error element of the ioctl. HAMMER Filesystem changes: Merge branch 'master' of ssh://crater.dragonflybsd.org/repository/git/dragonfly into devel Matthias Schmidt (142): Hello :) o Change my personal copyright to the DragonFly copyright Add 'H' to usage() options. This was missing from the last commit. Don't randomize fortune dat files during build time. fortune will display Pass NULL instead of getprogname() to pidfile(). pidfile() will automatically Test cvs add :) Enhance the comment Sync the adduser(8) command with FreeBSD. Summary: Sync etc/periodic with FreeBSD. Short summary: Update the man page to reflect the recent sync with FreeBSD. Remove bogus non-reentrant "temporary" implementation of gethostbyaddr_r() Remove 3rd clause of Berkely copyright, per letter. Don't rename 4. to 3. Enclose O_ROOTCRED in _KERNEL_STRUCTURES. This is needed for the upcoming Update kdump(1) to print more human readable information. o Add missing dot (.) Update to get entries for the Sun Grid Engine and iSCSI. Mention /etc/firmware for firmware(9) image files. Add pam_nologin(8) to the tree. pam_nologin verifies o Mention pam_nologin(8) in nologin(5). Only adapt the changes from FreeBSD Major update to pkg_search(1) Move the following entries from kern to security Renamed kern.ps_showallprocs to security.ps_showallprocs Commit pkg_radd(1) on behalf of corecode@. pkg_radd is a wrapper for If -m is specified, a newfs(8) command is printed that can be used to Add pam(3) support for cron(8). cron uses pam to check if the user's account Sync the passive fingerprinting database with OpenBSD to get support for Remove reference to the FreeBSD developers handbook. We have a chapter Sync with FreeBSD. This bings us the -m option to look for a specific Use the new kldstat -q/-m options instead of "| grep" Replace home-grown list iteration with methods from sys/queue.h Add support for network devices found on Intel ICH9 hardware. I have one Warn the user if he uses -e and procfs(5) is n | http://leaf.dragonflybsd.org/mailarchive/commits/2008-12/msg00037.html | crawl-002 | en | refinedweb |
I (very foolishly) spent a few minutes today trying to figure out why applying WCF tracing configuration to a ADO.NET Data Services client (i.e. a proxy generated with webdatagen.exe) wasn't producing me any tracing results.
It didn't take too long to realise that the client proxy isn't actually a WCF proxy. It just uses HttpWebRequest directly.
If you want WCF tracing, put your configuration onto your service side code.
I didn't actually find WCF that useful in tracing messages here and so fell back to inserting a proxy and tracing at the HTTP level.
That didn't work perfectly for me either - I found most value by taking my Entity Framework class (i.e. the class that derives from ObjectContext) and then adding another class which derives from that, handling the SavingChanges event in the constructor and then sticking a breakpoint in my event handler server-side in order that I could have a look at the ObjectStateManager as calls come in from Data Services layer.
What I mean here is...
public class MyContext : demoEntities
{
public MyContext()
{
this.SavingChanges += OnSavingChanges;
}
void OnSavingChanges(object sender, EventArgs e)
{
Debugger.Break();
// Now we can use the debugger to look at the object
// state manager.
ObjectStateManager m = this.ObjectStateManager;
}
}
where demoEntities is the ObjectContext-derived class that the EF tooling spits out for me from my database. That class is a partial class but the generation tool already throws out a default constructor so I thought it would be "better" to derive here. | http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/01/02/10058.aspx | crawl-002 | en | refinedweb |
Problem : To generate all r-Permutation with repetitions of a set of distinct elements
Before we start discussing about the implementation we will go through the basic definitions of Permutations. Then we discuss the method to generate r-Permutations with repetitions with examples, and at last we implement a C Language Program of the problem. Continue reading.
Definitions
- Permutation:
A Permutation of a set of distinct objects is an ordered arrangement of these objects.
Let A = {1,2,3} , then {1,2,3}, {1,3,2} , {2,1,3}, {2,3,1}, {3,2,1}, {3,1,2} are the permutations of set A
- r-Permutation:
An r-Permutation of a set of distinct objects is an ordered arrangement of some of these objects. Let there be n distinct elements in a set, then a permutation of r elements selected from that set is an r-Permutation of the set.
Let A = {1,2,3} then {1,2},{2,1}{1,3},{3,1},{2,3},{3,2} are the 2-permutations of set A
- r-Permutation with repetitions:
An r-Permutation with repetitions of a set of distinct objects is an ordered arrangement of some of these objects, each allowed to be appear more than one time.
Let A = {1,2,3} then {1,1},{1,2},{1,3},{2,1},{2,2},{2,3},{3,1},{3,2},{3,3} are the 2-permutations of set A
Let there be n elements in a set and we need to generate r all permutations of that set. Then by product rule we get there would be nr such r-Permutations. For example if n = 10 and r = 3, then the first object of the permutation could be any of the 10 objects of the set. For each object in the first position of the permutation, the second position can have any 10 objects of the set, and for each object in the second position there can be also 10 objects in the third position of the permutation. That makes 10 * 10 * 10 = 1000 = 103 permutations.
The Method
When we count the numbers in decimal number system, we actually generate r-permutation with repetitions with the ten decimal digits (0,1,2,3,4,5,6,7,8,9). Similarly when we count in binary we generate r-permutation with repetitions with the two bits (0 and 1) , and the same with octal and hexadecimal. In case of hexadecimal we use the first 6 alphabet as the last six symbols from the alphabet. Each position in a number runs from its minimum base value to its maximum base value. If there is a set with n symbols and we need to generate all r length permutations of that set, we will simply generate all the r length n base numbers and use the different symbols from the set to denote each different value. This technique is described elaborately with examples below.
Description
In a number the rightmost digit is called the LSB or the Least Significant Bit or Position and the leftmost digit is called the MSB or the Most Significant Bit or Position. (Here Bit, Digit and Symbols are used as the same thing). The left positions are of more significance and the right positions of a position is of more significance. Now note how a number is counted:
Let there be a 4 digit decimal number, and it starts from 0000. First the LSB (0th position) of the number will increase while it does not attain its maximum value, 9 . Thus we count the numbers 0000 to 0009. Next when we attempts to increase 9, we see that it already at the maximum value that a decimal digit can have, so the LSB is reset to 0 and the Next significant digit at position 1 is incremented from 0 to 1 and the number 0009 becomes 0010. Again the LSB increases from 0 to 9, thus counting from 0010 to 0019. In the next count the LSB is again reset and the Next significant position (position 1) is increased making it 0020. Similarly the LSB keeps on counting 0 to 9 and then again 0. At each 9 to 0 reset the Next more significant bit, the 1st digit increments. Let us see some more examples to make the thing clearer.
Let us consider the number is 0090 we will attempt to increment it. In this case the LSB will count from 0 to 9, and then reset to 0, then the next significant digit at position 1 attempts to increase, but it is already at 9 the maximum value, so position 1 resets to 0 , this triggers the 2nd position to increment from 0 to 1 thus making 0100.
In case of incrementing a number say 0099999, the 0th position will reset to 0 making the 1st position to increment which also resets to zero triggering 2nd position to increment, but this will also resets to zero and the 3rd position increments which also resets making 4th position to increment which again resets and increments 5th position from 0 to 1 and making the number 0100000 .
If there is a number 00969 the the LSB will be reset to 0 (it already is on it’s maximum value) and trigger the increment of 1st position making it 6 to 7 thus the number will become 00970. So a position will only increment if its next less significant position resets, and a position will reset when tried to increment it beyond the maximum value it can attain.
For an Octal number where each digit’s minimum and maximum attainable values are 0 and 7 respectively. In hexadecimal each digit can attain a minimum of 0 and the maximum of 15. The values from 10 to 15 are labeled with some symbols from the alphabet which are A to F respectively. Similarly say a 26 base number system will run from 0 to 25 (or 1 to 26) and we can assign the lowercase alphabets to each value.
The basic thing is a digit in a certain position will increase when the digit on its right side (the next less significant position) is reset, in other words when a digit resets from its maximum value to its minimum the next more significant digit will increase one count. Thus the reset of each digit triggers the increment of the next more significant position digit.
Now just think that you are given 10 characters say “abcdefghij” and you are told to generate all 4-permutation with repetitions of this set, that is generate all possible permutations of length 4 with repetitions. There will be a total of 1010 permutations. Can you find the similarity of this problem with counting all four digit decimal numbers from 0000 to 9999 ? If we just relabel the ten digits of the decimal number system with the characters in the set, and instead of printing out digit symbols we print out the characters, then we have the solution.
If we label 0 with a , 1 with b , 2 with c …. 9 with j , then the number 1654 will represent the string “bfed” , 6874 will represent “fhgd” etc. So counting up from 0000 to 9999 and labeling each number with the set characters will generate all r-permutation with repetitions of the set.
Notice that this problem does not depend on which characters you have to permute, because you can assign any label to the corresponding numbers. This depends the cardinality of the set, that is the number of characters i the set to permute. If there are 8 characters say “oklijhut” in the set to permute then the problem would be same , but the count would be like counting octal or 8 base number and then assigning labels to corresponding digit values.
Let us say there are 13 characters to permute, say “qwertyuiopasd” . Then the count would be like a 13 base number system. The minimum attainable value of a digit position would be 0 and the maximum attainable value would be 12 (0 to 12 are thirteen values). The LSB would increase from 0 to 12 and then reset to 0 making the next significant position to 1, and the counting would proceed as described above and then labeling each value with corresponding labels from the character set will generate the r-permutations of the set. If a number in 13 base is (6)(10)(9)(12)(5) then we get “ypost” after labeling each value with its corresponding symbol from the set.
Generally, if the set with which we need to generate r-permutation with repetitions has cardinality n , then we need to generate all r length numbers of base n number system, and then label each values of the number system with the elements of the set. Thus any arbitrary length of set can be permuted by generating all n base numbers with r length.
For example if the set consists of all alpha (upper case and lower case) numeric symbols and we need a 5-permutation with repetitions, then we will count all the 5 length numbers of a 26 + 26 + 10 = 62 base number system (a total of 62 5 = 916132832 permutations).
Program Implementation
The complete source code and it’s description is shown below
/* Program to generate all r-Permutation of a set with distinct element * This code is a part of */ #include <stdio.h> #include <math.h> #include <stdlib.h> #include <string.h> /* type values used in init_char_set() function */ #define UPPER 1 #define LOWER 2 #define NUM 4 #define PUNCT 16 #define ALPHA 3 #define ALNUM 7 #define ALL 23 #define CUSTOM 32 /* pre defined char sets used in init_char_set() function */ #define UPPER_CHAR_SET "ABCDEFGHIJKLMNOPQRSTUVWXYZ" #define LOWER_CHAR_SET "abcdefghijklmnopqrstuvwxyz" #define DIGIT_CHAR_SET "0123456789" #define PUNCT_CHAR_SET "~!@#$%^&*()_+{}|[]:\"<>?,./;'\\=-" #define MAX_CHARS 150 #define MAX_GUESS 20 #ifndef NUL #define NUL '' #endif /* Function Prototypes */ void permute_rep (const char *set, int n, int r); char *init_char_set (short int type, char *custom); int *init_perm_array (int len); void make_perm_string (int *perm, const char *set, char *perm_str, int len, int print_flag); /* Main Function, Drives the permute_rep() function */ int main (void) { char *set = NULL, custom[MAX_CHARS]; int r; printf ("\nr-Permutation with repetitions.\n"); printf ("Enter String Set To Permute: "); scanf ("%s", custom); printf ("\nLength Of Permutations (r): "); scanf ("%d", &r); set = init_char_set (CUSTOM, custom); printf ("\nPermutation Symbol set: \"%s\"\n", set); permute_rep (set, strlen (set), r); printf ("\nfinished\n"); return 0; } /* Function Name : permute_rep * Parameters : * @ (const char *) set : Pointer to the symbol sets to permute * @ (int) n : The length upto which the set would be used * @ (int) r : The length of the generated permutations * Return Value : (void) * Description : Generates all the Permutation with repetitions of length 'r' from the 'set' upto length 'n' . * Optionally prints them in stdout. */ void permute_rep (const char *set, int n, int r) { int *perm; char perm_str[MAX_CHARS]; int i, j; perm = init_perm_array (r); while (perm[r] == 0) { for (j = 0; j < n; j++) { make_perm_string (perm, set, perm_str, r, 1); perm[0]++; } perm[0]++; for (i = 0; i < r; i++) { if (perm[i] >= n) { perm[i] = 0; perm[i + 1]++; } } } } /* Function Name : init_char_set * Parameters : * @ (short int) type : The inbuilt type values to select character sets. * 'type' could be: * 1, 2, 4, 16, 32 or any of these values ORed. These are #defined * @ (char *) custom : Pointer to a custom symbol set to initialize. type should be 32 in, * else this pointer is ignored. * Return Value : (char *) : Returns a pointer to the initialized character set * Description : Allocates and initializes a pointer with a string of symbols to be permuted, and returns it */ char * init_char_set (short int type, char *custom) { char upper[] = UPPER_CHAR_SET; char lower[] = LOWER_CHAR_SET; char num[] = DIGIT_CHAR_SET; char punct[] = PUNCT_CHAR_SET; char *set; set = (char *) malloc (sizeof (char) * MAX_CHARS); if (type & UPPER) { strcat (set, upper); } if (type & LOWER) { strcat (set, lower); } if (type & NUM) { strcat (set, num); } if (type & PUNCT) { strcat (set, punct); } /* Remove redundant elements from custom string and build set. If input set is "hello" * then it will be reduced to "helo" */ if (type & CUSTOM) { int i, j, k, n = strlen (custom), flag; for (i = 0, k = 0; i < n; i++) { for (flag = 0, j = 0; j < k; j++) { if (custom[i] == set[j]) { flag = 1; break; } } if (flag == 0) { set[k] = custom[i]; k++; } } } return set; } /* Function Name : init_perm_array * Parameters : * @ (int) len : The length of the array * Return Value : (int *) : A pointer to the allocated permutation array * Description : Allocates and initializes with 0 an array, which is used for generating 'r' base numbers */ int * init_perm_array (int len) { int *perm; perm = (int *) calloc (len + 1, sizeof (int)); return perm; } /* Function Name : make_perm_string * Parameters : * @ (int *) perm : Pointer to the current permutation count state * @ (const char *) set : Pointer to the symbol set to be permuted * @ (char *) perm_str : Pointer to the string containing permutation * @ (int) len : The length of permutation * @ (int) print_state : A flag. If true prints the permutation in stdout, else does not. * Return Value : (void) * Description : Makes a NULL terminated string representing the permutation of the symbols, * from the 'set' represented by 'perm' state. This labels each position of 'perm' * with the symbols from 'set' makes a string and returns it. * Also prints the string if 'print_state' is true. */ void make_perm_string (int *perm, const char *set, char *perm_str, int len, int print_state) { int i, j; for (i = len - 1, j = 0; i >= 0; i--, j++) { perm_str[j] = set[*(perm + i)]; } perm_str[j] = NUL; if (print_state) printf ("%s\n", perm_str); }
Description of permute_rep() function:
This receives a const char type pointer set which contains the symbols which are to be permuted, the value of n an integer which indicates the length of the set to be used (generally strlen(set)) and also determines the number system base which would be counted, and it accepts r that is the length of the generated permutations (the value of r in r-permutation). So if we want to generate 4-permutations with repetitions of a set “qwerty” then the call would be
Note: If you think to use the whole length of the string you pass everytime, then you may calculate strlen(set) inside permute_rep() and omit passing n.
permute_rep ("qwerty",sizeof("qwerty"),4);
The permute_rep() function first executes the perm = init_perm_array(r) which allocates an integer array of length r and returns its base address.
The outer while loop controls the count and limits the length of the count to r. This works like this: When r = 4 , we are only using position 0,1,2 and 3 . After the count 09999 the next number is 10000 , that is the count goes beyond position 4 making position 5 to change to 1, indicating all 4 permutations have been generated. This acts as a control flag. That is why an extra cell is allocated to perm in the init_perm_array() function. This loop also prints the generated permutation with the help of make_perm_string() (passing print_state = 1).
The first for loop counts the LSB from its minimum to maximum. The next statement perm[0]++ makes the LSB exceed it’s maximum attainable value then making it invalid (for the current). That is if n = 18 (when permuting 18 element set) then the first for loop will count the LSB from 0 to 17 and the next statement will take the LSB from 17 to 18, making it invalid. This invalid position is detected in the next for loop and it is reset to zero and the next significant place is incremented and checked again for an invalid value. That is the second for loop is used to reset and increment all the positions except the LSB. This loop runs until the number has not end or it has detected a valid value.
Other Functions:
Two other functions are defined which are make_guess_string() and init_char_set(). The make_perm_string() function simply labels the values in the perm array with the different symbols from set. Although the make_perm_string() constructs the string in reverse order, it is not needed. the print_state parameter tells the function if to also print the generated permutation or just return the string. This could be helpful generating brute force password hashes and attempting to crack a password. init_char_set() initializes the set of symbols to be permuted. This is made for easy using. Inbuilt types are defines like UPPER, LOWER,NUM,PUNCT,ALNUM,ALPHA,ALL and CUSTOM. Calling this with
set = init_char_set (UPPER, set, NULL);
will allocate and initialize the pointer set with uppercase alphabets
set = init_char_set (UPPER | LOWER, set, NULL); or set = init_char_set (ALPHA, set, NULL);
will allocate and initialize set with uppercase and lowercase alphabets
set = init_char_set (CUSTOM, set, "qwerty");
will allocate and initialize set with “qwerty”
This function also removes redundant symbols from the custom set. that is “aabbccdd” is considered “abcd” . The last parameter is ignored if type is not CUSTOM.
A 20 Digit Decimal Counter
We will make a 20 Digit or more digit decimal counter with the help of the above process. It can simply be seen that is the set = "0123456789" and r = 1,2,3 ... 20 , the above program could generate this thing.
for ( i = 0; i < n; i++) permute_rep (set, strlen(set), i);
What we will do is reduce the above function and design to only count decimal numbers, and remove additional functions to label the permutations and save computation. We will initilize the permutation array with the symbols itself. Because the decimal symbols (0 to 9) are adjacent in the ASCII table we can just increment normally to get the next count, and reset is done with the ASCII value of 0. In this case we need to generate permutation in a certain order, for example a decimal counter which is capable to count 20 digit number or a 50 digit number. Normal data-types would overflow several times counting such a large number. which assigns line numbers to a very long file.
This method is applied in the GNU cat program with the -n or -b switches are on. A sample of a 20digit decimal counter is presented below:
void generate_line_number (void); #define LSB 20 /* pre calculated value 22 - 2, skipping the last two places */ #define LENGTH 23 /* length is 22 valid number length 20 last two positions for null and \t */ char line_number[LENGTH] = { ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', /* Upper 10 digits */ '\r', ' ', ' ', ' ', '\r', ' ',' ', ' ', ' ',' ', /* Lower 10 digits */ '0', '\t' /* Null terminated and tab character */ }; void generate_line_number (void) { int i; line_number[LSB]++; if (line_number[LSB] == ':') { for (i = LSB; i >= 0; i--) { if (line_number[i] == ':') { line_number[i] = '0'; if (line_number[i - 1] <= ' ') line_number[i - 1] = '1'; else line_number[i - 1]++; } else break; } } }
Note that in this implementation the LSB is the rightmost position. The line_number is a pre-formatted array. Each time this function is called it generates a new decimal number. This code was used in the cat equivalent program of Whitix OS project. This does not needs to construct the permutation, as the line_number is already a NULL terminated and tab aligned string.
Update Information
14.10.2009 : Source code update. Memory leak fixed.
Pingback: All Combinations Without Repetitions « Phoxis
Pingback: wcat : A GNU cat implementation | Phoxis
Hello,
The code you posted , some symbols were lost because the wordpress.com system replaced them with the HTML equivalent code. I have edited then and tried to compile the code. It does not compile here it shows some errors. Probably the first two statements would be cin and not cout . After i fixed the errors, when i ran the code it showed me Segmentation Fault . That means there is some invalid memory usage.
Could you please fix the code, so that i can add that here ? And please remove conio.h, as most of the readers would have problem with that library.
Pingback: Gerando Permutações-r com Repetição em C | Daemonio Labs
Thank you .
i will try it
could you please tell me step by step what to do to generate permutations using this code?
how to use it ?
note that this specific problem implements an r-permutation with repetitions. That is, a set of symbols is given, the routine would select all possible subsets of k symbols, and selection of one single symbol from the set is allowed. Each selected symbol is treated as a unique even they may be identical. This can be used to form a counter of base n as the last decimal counter describes.
For a description of the code, probably i have tried to writeup an exhaustive description of the process. I would tell you to go through the code.
If you need to permutation of n symbols without repetitions, ie, something like
: 123, 132, 213, 231, 312, 321 which is something like an n-permutation without repetition, selecting all the symbols and rearranging them in unique positions, then this is not the implementation. | http://phoxis.org/2009/08/01/rpermrep/ | CC-MAIN-2013-20 | en | refinedweb |
View all questions by Debananda
Answered On : Sep 13th, 2009
View all answers by jv500
#include <stdio.h>#include <stdlib.h>#include <string.h>/* Program to print all combinations of an input */char *globalPtr;void swap (char *a, char *b){char t = *a; *a = *b; *b = t;}void displayComb (char *s, size_t len){int i; if (s == NULL) { return ; } if (len == 1) { printf ("%sn", globalPtr); return ; } displayComb (s+1, len-1); for (i = 1; i < len; i++) { swap (s, s+i); displayComb (s+1, len-1); swap (s+i, s); } return ;}
If you think the above answer is not correct, Please select a reason and add your answer below.
C
C++
What will you do when you are not able to get your message across to your audience?
What do you do when things do not go as planned? Cite an example.
What do you think about our company?
How to Overcome Employment Gap during interview.
What are the career options available in Software Testing Industry?
Do you like working in a team or alone? Explain.
Tell me three things about you, which are not found in your resume
How do you approach a problem?
Do you seek advice from others when you have a question?. | http://www.geekinterview.com/question_details/55258 | CC-MAIN-2013-20 | en | refinedweb |
02 March 2010 08:15 [Source: ICIS news]
SINGAPORE (ICIS news)--Indian base oils market players said on Tuesday they were dismayed by the unexpected hike of import duty on base oils to 10% from 5% in the country's budget announced last week.
The Indian budget - announced on 28 February - also incorporates a hike in import duty on crude oil from zero to 5%.
“The reduction in duties implemented by the Indian government in 2008 at the height of oil price boom has effectively been reinstated,” a large Indian lubricant blender said.
Another base oils buyer said: "The import duty hikes on base oils were totally unexpected and shocking."Base oils are used for making automotive and industrial lubricants, process oils like transformer oils and white oils.
?xml:namespace>
Import offers for group I base oils of Russian origin were heard around $760-765/tonne (€562-566/tonne) CFR (cost and freight)
Local base oils refineries like Indian Oil raised domestic prices of the product by Rs1000-2400/tonne ($22-52/tonne) on 2 March citing higher costs due to increased import duty on crude.
($1 = €0.74 / $1 = Rs46)For more. | http://www.icis.com/Articles/2010/03/02/9338923/indian+base+oils+import+duty+hike+takes+market+by+surprise.html | CC-MAIN-2013-20 | en | refinedweb |
6 August 2012
By clicking Submit, you accept the Adobe Terms of Use.
Experience building games for iOS with Flash Builder or Flash Professional and Adobe AIR will help you make the most of this article.
Additional required other products
Milkman Games EasyPush Notifications Extension for iOS
Intermediate
Push notifications are a powerful feature for engaging users in mobile apps, enabling you to share news and updates, bring lapsed users back to your app, and even update the state of an app remotely.
Setting up push notifications can be a complicated affair, but with the EasyPush Notifications native extension for iOS, you can implement push simply and efficiently:
Before you can use push notifications on iOS, you'll need to create a new App ID specifically for your push-enabled app, create a special provisioning profile for your app, and set up a back-end certificate for the server.
Note: When you create your app in iTunes Connect before publishing to the App Store, be sure to choose this App ID so push notifications will be enabled for production.
Push notifications require a special SSL certificate that allows a back-end server to manage notifications for your app. There are separate certificates for development and production. You can use these certificates with a service provider such as Urban Airship so you won't have to run your own servers.
The Apple Push Notification service SSL Certificate Assistant will appear to help you generate a Certificate Signing Request (see Figure 4). You have already gone through this process when you began development to generate your Adobe AIR signing certificates; you'll need to repeat it now to generate the new Push SSL certificates.
You'll need to repeat this process later to generate a Production SSL certificate before publishing your app to the App Store.
When using push notifications, Apple requires you to generate a separate provisioning (.mobileprovision) file for each application you want to create. Be sure you've completed the steps above and waited a few minutes before creating your provisioning profile.
You'll need to repeat these steps to create a Distribution Provisioning Profile, using your production SSL key, when you're ready to submit the app to the App Store.
The EasyPush Notifications native extension is designed so that it can be quickly and easily integrated with Urban Airship, a popular provider of back-end services for managing, distributing, and testing push notifications, that offers both free and affordable plans.
The EasyPush Notifications native extension requires the AIR 3.3 SDK or a later version. You can download the latest AIR SDK from. If you haven't already installed the AIR 3.3 SDK for your Flash Professional CS6 or Flash Builder IDE, follow the instructions below.
<name>tag to
Flex 4.6.0 (AIR 3.3).
Enabling the AIR 3.3 SDK in Flash Builder 4.6 on OS X
sudo cp -Rp /Applications/Adobe\ Flash\ Builder\ 4.6/sdks/AIR33SDK/ /Applications/Adobe\ Flash\ Builder\ 4.6/sdks/4.6.0/
<name>tag to
Flex 4.6.0 (AIR 3.3).
The most time consuming part is done. Now you're ready to write some code!
The next step is to add the com.milkmangames.extensions.EasyPush.ane file (or EasyPushAPI.swc for FlashDevelop) to your project. These files can be found in the extension folder of the EasyPush Notifications extension package.
In Flash Professional CS6:
In Flash Builder 4.6:
In FlashDevelop:
You can start using the EasyPush Notifications extension with a few simple calls. See example/EasyPush.as for a full example class.
Follow these steps to get started:
import com.milkmangames.nativeextensions.*; import com.milkmangames.nativeextensions.events.*;
isSupported()and
areNotificationsAvailable()methods. If they return
false, your app won't be able to use push notifications (either the device does not support them or the user has disabled them.)
If notifications are supported and available, you can initialize Urban Airship by calling the
initAirship() method. This should be the very first thing your code does after the app starts to ensure any messages are properly received.
if(EasyPush.isSupported() && EasyPush.areNotificationsAvailable()) { // "YOUR_AIRSHIP_KEY” - put your application key from urban airship here // "YOUR_AIRSHIP_SECRET” - put the app secret from urban airship here // true – sets development mode to ON. Set to false only when publishing for app store. // true – enables automatic badge number tracking on the icon (optional) // true – will automatically show alert boxes when a push is received. EasyPush.initAirship("YOUR_AIRSHIP_KEY”,”YOUR_AIRSHIP_SECRET”,”airship”,true,true,true); } else { trace("Push is not supported or is turned off...”); return; }
The
initAirship() method takes six parameters. The first two are your app's Application Key and Secret, which you get from the Urban Airship website as described earlier. The third string value is reserved for future API use; you can use any string here.
The next three Boolean parameters are
developmentMode ,
autoBadge , and
autoAlertBox , in that order.
Setting
developmentMode to
true enables Development mode for your app- so that the Urban Airship server will transfer messages between its own sandbox server, and Apple's sandbox server, instead of using production URLs You'll want to set this to
false when you are ready to publish to the app store.
Setting
autoBadge to
true will cause the notification badge number on your app's icon to be controlled automatically by Urban Airship.
Setting
autoAlertBox to
true will cause an AlertBox to appear in the native UI if the app receives a push while it's running. You can set it to
false if you want to handle those messages with your own UI.
That's all you need for receiving basic push notifications!
Urban Airship lets you associate an alias or group of tags with a user. This is useful for targeting push notifications to a specific user or group of users.
You can set an alias that's unique to the user to make them easily identifiable for push notifications. For instance, if you're using the GoViral extension to log the user into Facebook, you might want to set their alias to their Facebook ID, so you can send direct notifications to that particular user; for example:
// sets the user's alias to "bob@internet.com”. // You'd want to be sure this is unique to your user though! EasyPush.airship.updateAlias("bob@internet.com”);
You can set tags to assign users to logical groups so that you can send selective push notifications to them later.
// create a vector array of tags var tags:Vector.<String>=new Vector.<String>(); tags.push("advanced"); tags.push("gamer"); EasyPush.airship.setAirshipTags(tags);
Quiet time is a period during which notifications will not be displayed. The following example silences push notifications for the next 15 minutes:
//Setting quiet time for next 15 minutes... var now:Date=new Date(); var inFifteen:Date=new Date(); inFifteen.setTime(now.millisecondsUTC+(15*60*1000)); EasyPush.airship.setQuietTime(now,inFifteen);
The EasyPush Notifications native extension dispatches several events that you may want to handle.
When the user starts your app and the extension is initialized, it will attempt to register them for notifications. As a result, either
PNAEvent.TOKEN_REGISTERED or
PNAEvent.TOKEN_REGISTRATION_FAILED will be dispatched. If registration succeeds, but some of the types of push requested are turned off (for instance, push messages are enabled, but not sounds), the
PNAEvent.TYPES_DISABLED event will fire. The following code illustrates how these events can be handled:
EasyPush.airship.addEventListener(PNAEvent.TOKEN_REGISTERED,onTokenRegistered); EasyPush.airship.addEventListener(PNAEvent.TOKEN_REGISTRATION_FAILED,onRegFailed); EasyPush.airship.addEventListener(PNAEvent.TYPES_DISABLED,onTokenTypesDisabled); function onTokenRegistered(e:PNAEvent):void { trace("token was registered: "+e.token); } function onRegFailed(e:PNAEvent):void { trace("reg failed: "+e.errorId+"="+e.errorMsg); } function onTokenTypesDisabled(e:PNAEvent):void { trace(“some types disabled:”+e.disabledTypes); }
There are two scenarios for receiving notifications. Either the app was open, and a push was received (
PNAEvent.FOREGROUND_NOTIFICATION ), or the app was in the background, and the user clicked the notification to open the app (
PNAEvent.RESUMED_FROM_NOTIFICATION ). In the former case, a message box with the notification will automatically be displayed to the user, if
autoAlertBox was set to
true in the
initAirship() call. If the user taps the OK button on this alert,
PNAEvent.ALERT_DISMISSED will be dispatched.
These events contain extra information about the notifications that were sent. You can use this data to perform additional actions in your app.
EasyPush.airship.addEventListener(PNAEvent.ALERT_DISMISSED,onAlertDismissed); EasyPush.airship.addEventListener(PNAEvent.FOREGROUND_NOTIFICATION,onNotification); EasyPush.airship.addEventListener(PNAEvent.RESUMED_FROM_NOTIFICATION,onNotification); function onNotification(e:PNAEvent):void { trace(“new notification, type+"="+e.rawPayload+","+e.badgeValue+","+e.title); } function onAlertDismissed(e:PNAEvent):void { trace(“Alert dismissed, payload="+e.rawPayload+","+e.badgeValue+","+e.title); }
In your application descriptor file, you need to specify the version of the AIR SDK you are using (3.3 or later) as well as a link to the extension. For a working example, see example/app.xml.
<application xmlns="">
<extensions> <extensionID>com.milkmangames.extensions.EasyPush</extensionID> </extensions>
<id>property exactly matches the App ID you created in the iOS Provisioning Portal.
entitlementselement to the application XML file and include in it the App ID you created in the iOS Provisioning Portal. The App ID consists of a string of random letters and numbers followed by your App Bundle ID. For instance, it might be something like "Q942RZTE24.com.yourcompany.yourgame". Copy this exact string twice into the
<iphone>block of your application descriptor, like so:
<iPhone> <InfoAdditions> <![CDATA[ <key>UIDeviceFamily</key> <array> <string>1</string> <string>2</string> </array> ]]> </InfoAdditions> <Entitlements> <![CDATA[ <key>application-identifier</key> <string>Q942RZTE24.com.yourcompany.yourgame</string> <key>aps-environment</key> <string>development</string> <key>get-task-allow</key> <true/> <key>keychain-access-groups</key> <array> <string>Q994RZTE24.com.milkmangames.pushexample</string> </array> ]]> </Entitlements> </iPhone>
developmentto
production, and remove the entire
<key>get-task-allow</key><true/>line.
If you're using Flash Builder 4.6 or later, or Flash Professional CS6 or later, and have added the EasyPush Notifications.EasyPush.ane file.
Here is an example build command line:
c:\dev\air_sdk_33.
Once you've built your app and successfully installed it on your device, it's easy to send a test push notification with Urban Airship:
aTitleproperty to the JSON Payload (see Figure 9).
That's it! The notification should trigger on your device. To send a notification to multiple devices, use the Send Broadcast option instead.
Now that you have push notifications up and running in your app, you may want to explore these other native extension tutorials:
For additional extensions, see More Native Extensions from Milkman Games. | http://www.adobe.com/devnet/air/articles/ios-push-notification-ane.html | CC-MAIN-2013-20 | en | refinedweb |
A force that acts on a PhysicsObject by way of an Integrator.
More...
#include "linearForce.h"
List of all members.
A force that acts on a PhysicsObject by way of an Integrator.
This is a pure virtual base class.
Definition at line 25 of file linearForce.h.
Destructor.
Definition at line 57 of file linearForce.cxx.
[protected]
Default/component-based constructor.
Definition at line 30 of file linearForce.cxx.
copy constructor
Definition at line 42 of file linearForce.cxx.
[inline, static]
This function is declared non-inline to work around a compiler bug in g++ 2.96.
Making it inline seems to cause problems in the optimizer.
Reimplemented from BaseForce.
Reimplemented in LinearControlForce, LinearCylinderVortexForce, LinearDistanceForce, LinearFrictionForce, LinearJitterForce, LinearNoiseForce, LinearRandomForce, LinearSinkForce, LinearSourceForce, LinearUserDefinedForce, and LinearVectorForce.
Definition at line 65 of file linearForce.h.
References BaseForce::init_type().
Referenced by LinearVectorForce::init_type(), LinearRandomForce::init_type(), LinearFrictionForce::init_type(), LinearDistanceForce::init_type(), LinearCylinderVortexForce::init_type(), and LinearControlForce::init_type().
[virtual]
Write a string representation of this instance to <out>.
Definition at line 97 of file linearForce.cxx.
0
Definition at line 110 of file linearForce.cxx. | http://www.panda3d.org/reference/1.7.2/cxx/classLinearForce.php | CC-MAIN-2013-20 | en | refinedweb |
Understanding Struts Controller
Understanding Struts Controller
In this section I will describe you the Controller.... It is the Controller part of the Struts
Framework. ActionServlet is configured
Generating PDF reports - JSP-Servlet
Generating PDF reports Hello everyone
i have submitted several question on this site but till now i have got no replies ...
let me ask my new Question
I am try to generate a pdf report using jsp .... i want to export
Jasper Reports - Java Beginners
Jasper Reports Hi,
I'm new to Jasper Reports. Please help me by giving a simple example of Jasper report generating.
Thank You,
Umesh .../),
it is free and based o JasperReports.
It lets you create sophisticated reports
Generating dynamic fields in struts2
Generating dynamic fields in struts2 Hi,
I want generate a web page which should have have some struts 2 tags in a group and a "[+]" button... to read those field values in controller? Please provide me some example
reports
reports hi i want to create reports in my projects .
plz give me some idea how to create reports in jsp
Struts PDF Generating Example
Struts PDF Generating Example
To generate a PDF in struts you need to use struts stream result type as
follows
<result name="success" type...;/param>
</result>
An example of PDF Generating is given below
reports creation
reports creation hi.................
how to create tabular format report in java swings?????????????
Please visit the following link:
Generating report in java
Generating report in java How can I generate a report in java
java programming:generating series
java programming:generating series WAP to print series:
| + || + ||| + |||| + .......... n terms
Generating pdf in Spring
Generating pdf in Spring Sir/Madam,
I need your help in generating a pdf by fetching the data form database and by using unicode in spring framework
Struts file downloading - Struts
Struts file downloading how to download a file when i open a file... is not showed even i check the file size also and generating excetion like.../struts/strutsfileuploadandsave.shtml
Thanks
Open Source Reports
easy with JasperReports
Generating reports is a common, if not always...Open Source Reports
ReportLab Open Source
ReportLab, since its early... (the ReportLab Toolkit) - our proven industry-strength PDF generating solution
Struts Alternative
-Controller (MVC) design paradigm. Most Struts applications use a browser as the client...
Struts Alternative
Struts is very robust and widely used framework, but there exists the alternative to the struts framework programming:generating series
java programming:generating series Write a program to generate series:
12345
1234
123
12
1
1
12
123
1234
12345
12345
Here is a code that displays the following pattern:
12345
1234
123
12
1
1
12
123
1234
java programming:generating series
java programming:generating series 12345
1234
123
12
1
1
12
123
1234
12345
Here is a code that displays the following pattern:
12345
1234
123
12
1
1
12
123
1234
12345
class Pattern
{
public
Java programming: generating series
Java programming: generating series Write a program to accept a string using buffered reader and replace character sequence 'cat' with 'dog'
Here is a java code that accept the string from the user using
Crystal clear reports
Crystal clear reports what is crystal clear and i-text reports in java? plz give me full information
java programming:generating series
java programming:generating series 1234554321
1234
4321
123
321
12
21
1
1
Here is a java code that displays the following pattern
1234554321
1234 4321
123 321
12 21
1 1
class Pattern
Create Crystal reports with PHP
Create Crystal reports with PHP I'm New to eclipse and php. I need to create a report using crystal report on php. is that possible. If it is, how could I install it to eclipse IDE. How to use
Generating bill in servlet - Development process
Generating bill in servlet I want to generate the bill using servlet for the resturant so any one please send me code
how to create reports in swing java?
how to create reports in swing java? how to create reports in swing java
generating time table - JSP-Servlet
generating time table hi friends, i want generate examination timetable for examinations of courses like btech(cse,ece,eee..etc),if i give starting date of examinaton then automatically generate timetable for all subjects
Generating password and id and triggering mail.
Generating password and id and triggering mail. I want a code for below mention situation
`print("Filling out the form and clicking on save button creates a new record in the system and a
unique id and a MD5 hash password
Jasper Reports - Development process
Hello - Struts
, instead of obtaining that data from external sources or generating data
generating random numbers - Java Beginners
generating random numbers We would like to be able to predict tomorrow's price of a share of stock. We have data on past daily prices. Based on that we will make a prediction. Our plan is to use a weighted average of the 5 most
Report Mill Studio editor
Report Mill Studio editor
ReportMill is the best Java application reporting tool available for dynamically generating reports and
web pages from Java applications
Task Scheduling in JAVA
, for example a application of report generating checks for new
database entry after one day and make reports according to the entries then save
all entries
How to Generate Reports in Java - Java Beginners
How to Generate Reports in Java How to Display and Generate Reports in Java?
Give Me Full Sample Code Hi Friend,
What do you want to display on reports.Please elaborate it.
Thanks
Struts 2.2.1 - Struts 2.2.1 Tutorial
2.2.1
Features of Struts 2.2.1
Understanding MVC design pattern... application
Miscellaneous Examples
Struts PDF Generating Example...Struts 2.2.1 - Struts 2.2.1 Tutorial
The Struts 2.2.1 framework is released
jsp - excel generating problem - JSP-Servlet
jsp - excel generating problem Hi,
I worked with the creating excel through jsp, which is the first example in this tutorial (generateExcelSheet.jsp). while running the program, the excel sheet is opening in download mode
Crystal Reports for Eclipse
Crystal Reports for Eclipse
Crystal Reports for Eclipse is an Eclipse Plug...
development environment can simply embed Crystal Reports into their java
namespace in struts.xml file - Struts
namespace in struts.xml file i not understand how namespace work in struts.xml file
Struts Problem Report
Struts has detected an unhandled..., enables extra debugging behaviors and reports to assist developers. To disable
generating unique combinations of length of x in a string of length l
generating unique combinations of length of x in a string of length l Generating unique combinations of length of x in a string of length l?
Suppose a string abcd in read from the console and we have to produce all unique
How to Create any type of Reports in Java - Java Beginners
How to Create any type of Reports in Java Hello Sir ,How I can create any type of Reports like Crystal Reports etc(Student Result Report) in Java Application,plz Help Me
generating itext pdf from java application - Java Beginners
generating itext pdf from java application hi,
Is there any method in page events of itext to remove page numbers from pdf generated frm java application. Hi friend,
Read for more information.
http
generating mock data class of times (start and end time and value)
generating mock data class of times (start and end time and value) Using the timertask function want to generate a set of mock data of times using the random DATE class and values for plotting on a graph using java.
How
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://www.roseindia.net/tutorialhelp/comment/54471 | CC-MAIN-2013-20 | en | refinedweb |
!ATTLIST div activerev CDATA #IMPLIED>
<!ATTLIST div nodeid CDATA #IMPLIED>
<!ATTLIST a command CDATA #IMPLIED>
What I am trying to achieve:
If I need to shoot say 3 raycasts (C#): 1 from transform.position (blue line), 1 to the left of this by an offset of lets say 2 units (left yellow line), and 1 to the right by the same offset (right yellow line), how can I do this? I imagine it could be something like:
transform.position + publicVariable (and)
transform.position - publicVariable
Here is my script:
using UnityEngine;
using System.Collections;
public class EnemyAI3 : MonoBehaviour {
public Transform target;
public int raycastLength = 1;
public float leftRaycast = 2f;
public float rightRaycast = -2f;
public bool raycastHitting = false;
public bool raycastHittingL = false;
public float turnSpeed = 5;
private Transform myTransform;
void Awake() {
myTransform = transform;
}
// Use this for initialization
void Start () {
GameObject go = GameObject.FindGameObjectWithTag("Player");
target = go.transform;
leftRaycast.x -= 2;
rightRaycast.x += 2;
}
// Update is called once per frame
void FixedUpdate () {
//Start Raycast forward
if(Physics.Raycast(myTransform.position, myTransform.forward, raycastLength)){
Debug.DrawLine(myTransform.position, myTransform.forward, Color.blue);
myTransform.Rotate(Vector3.up, -90 * turnSpeed * Time.smoothDeltaTime);
raycastHitting = true;
}
if(!Physics.Raycast(myTransform.position, myTransform.forward, raycastLength)){
raycastHitting = false;
}
//End Raycast
//Start Raycast Left
if(Physics.Raycast(myTransform.position, myTransform.right * leftRaycast, raycastLength)){
Debug.DrawLine(myTransform.position, myTransform.right * leftRaycast, Color.yellow);
myTransform.Rotate(Vector3.up, 90 * turnSpeed * Time.smoothDeltaTime);
raycastHittingL = true;
}
if(!Physics.Raycast(myTransform.position, myTransform.right * leftRaycast, raycastLength)){
raycastHittingL = false;
}
//End Raycast
}
}
What is the best way to do this?
asked
Mar 07 '12 at 06:38 AM
Hamesh81
583
●
12
●
20
●
27
edited
Mar 07 '12 at 09:55 AM
transform.position + transform.right * offset1; // for instance, offset1 = -2
transform.position + transform.right;
transform.position + transform.right * offset2; // for instance, offset2 = 2
answered
Mar 07 '12 at 07:03 AM
Berenger
11k
●
12
●
19
●
53
edited
Mar 07 '12 at 03:38 PM
Thanks for your help. I threw in a diagram up top to better show what I'm trying to do. The blue line is the first raycast (mytransform.position) in the script and the left yellow line is the left raycast (myTransform.position, myTransform.right * leftRaycast). I've changed the second raycast to as you suggested but it changes the angle of the raycast it doesn't actually offset it. Or have I done it wrong?
Keep in mind that a ray is composed of two things, an origin and a direction. Saying "the first raycast (mytransform.position)" makes no sens if you don't add the direction (I suppose you imply it's forward there).
So you want to cast a ray forward, to the right and to the left, with an offset on the origine : the the three lines of codes in my answer. the direction don't need to be multiplied, unless you want to invert it. In your case, and respectively with the origines I gave you, directions are transform.left, transform.forward and transform.right.
Thanks for trying to help Berenger, but I couldn't get your solution to work. I ended up using a Vector3 variable to determine the direction which allowed me to set a ".x" offset for the other raycasts. All is gd now :
raycast x1525
position x884
multiple x290
offset x109
asked: Mar 07 '12 at 06:38 AM
Seen: 650 times
Last Updated: Mar 09 '12 at 03:23 PM
Multiple Raycasts
Getting position where raycast goes offscreen
2d Position to 3d Position
Raycast problem
How Can I Adjust My Ray Cast Position Upwards?
Check if position is inside a collider
Shoot A Raycast At A Specific Location?
A Helping Hand With Raycasting
Using RayCast to Calculate Transforms Height
Having an object point towards mouse position in 3d world
EnterpriseSocial Q&A | http://answers.unity3d.com/questions/224511/how-can-i-offset-a-raycast-along-the-transforms-lo.html | CC-MAIN-2013-20 | en | refinedweb |
Hi,
I'm having some issues getting the SD breakout board working consistently with an Arduino Mega ADK, the CardInfo example works on occasion (one of every 30 times or so).
Here's what I've got, that appears to work on occasion:
1. I have what I'm pretty certain is an authentic Arduino Mega ADK board
2. I'm using the latest version of the Arduino IDE /w the /libraries/SD replaced by the Adafruit github version
3. I've formatted the SD card to be Fat 16 using windows, following the directions in the Adafruit tutorial
4. I changed MEGA_SOFT_SPI from 0 to 1 in /SD/Utilities/Sd2Card.h
5. I've updated the pin assignments in that file as well to be:
#else // SOFTWARE_SPI
// define software SPI pins so Mega can use unmodified GPS Shield
/** SPI chip select pin */
uint8_t const SD_CHIP_SELECT_PIN = 53;
/** SPI Master Out Slave In pin */
uint8_t const SPI_MOSI_PIN = 51;
/** SPI Master In Slave Out pin */
uint8_t const SPI_MISO_PIN = 50;
/** SPI Clock pin */
uint8_t const SPI_SCK_PIN = 52;
#endif // SOFTWARE_SPI
And I've followed the wiring instructions on the Adafruit SD card tutorial for a MEGA card as well, and tried re-wiring it a few times to eliminate a connectivity problem.
One out of about every 30 times I try it CardInfo works, and I can hit the reset button on the Arduino and it will work again, I even had the Datalogger example working one time. But it appears random if it will work or not and usually it says it can't initialize the card.
Any ideas?
Thanks,
celer | http://forums.adafruit.com/viewtopic.php?p=142132 | CC-MAIN-2013-20 | en | refinedweb |
Introduction
Recently I was relieved to hear that Silverlight is here to stay<Hefty Sigh of Relief!!>. MVVM is a big part of the Silverlight features in Silverlight 5 (it's good to see that Microsoft is paying close attention to developer feedback). In this article we'll talk about using MVVM architecture for a confirmation dialog box. You probably know that you can do a confirmation dialogs using the following code:
if (System.Windows.Browser.HtmlPage.Window.Confirm("Are you sure?")) { // Do stuff...
}
The problem with this approach is that it lacks flexibility with styling and features. How can we create a Confirmation Dialog in Silverlight with our own window and give it that whole MVVM look and feel?
Let's begin by creating a new Silverlight ChildWindow and editing it in Blend. We'll change it to say yes and no and add a textblock for our message:
Figure 1 - Child Window changed in Blend
Now we'll use MVVM Light to bind EventCommands to both buttons. The commands will execute code for confirmation and denial in the ViewModel. We'll add the references for MVVM Light to our project so it will pick up the EventToCommand in the project and in Blend.
Figure 2 - References for MVVM Light
We can now drag the EventToCommand onto each of our yes and no buttons. Now we'll bind to commands that we'll need to create in our view model. The XAML ends up looking like this:
Listing 1 – XAML for Buttons using EventToCommand
<Button x:
<i:Interaction.Triggers>
<i:EventTrigger
<GalaSoft_MvvmLight_Command:EventToCommand
</i:EventTrigger>
</i:Interaction.Triggers>
</Button>
<Button x:
<i:Interaction.Triggers>
<i:EventTrigger
<GalaSoft_MvvmLight_Command:EventToCommand
</i:EventTrigger>
</i:Interaction.Triggers>
Figure 3 - Choosing EventToCommand from Blend
If we look at our XAML code, we can see the additional EventToCommand markup that was added to each of our buttons. The Command for Yes binds to the ConfirmCommand and for No binds to the DenyCommand. Let's create a ViewModel for our application called ConfirmationDialogViewModel.cs so we can implement these commands. As an aside, I'd like to talk about how I layout my project for Silverlight. I like to organize my solution into separate folders that reflect the whole MVVM concept. I create a View folder for Views, a Model folder for any model behavior, and a ViewModel folder for all my ViewModels binding to the Views. Then I use ReSharper to rename all my namespaces to reflect the appropriate folders so I don't forget anything.
Also let's say a few words about naming convention. When using MVVM, I like to suffix all my View classes with the word View and all my ViewModel classes with the word ViewModel. It just makes it easier to identify their purpose. For our Confirmation Dialog, I named the view ConfirmationDialogView and for our Confirmation Dialog ViewModel I called the class ConfirmationDialogViewModel. ConfirmationDialogView may seem a bit redundant, but there is something to say for being consistent in naming your classes.
My first step in coding is that I'll need to hook up the ViewModel to the DataContext of my dialog. Because we want to pass a parameter into our dialog to tell it the message to display, we'll hook up the ViewModel in code behind. We also want to pass a callback to our confirmation dialog, so after the user clicks okay, the callback can get executed. The reason we need to pass a callback is because the Silverlight world is asynchronous. You can't display ShowDialog and stop Silverlight from going to the next line of code like you can with HtmlPage.Window.Confirm. To be really thorough, we'll pass 2 callbacks: One for confirm and one for deny.
Listing 2 – The Confirmation Dialog code behind
public partial class ConfirmationDialog : ChildWindow
{
public ConfirmationDialog(string confirmationQuestion, Action<object> confirmCallback, Action<object> denyCallback, object confirmPayload, object denyPayload)
{
InitializeComponent();
DataContext = new ConfirmationDialogViewModel(confirmationQuestion, confirmCallback, denyCallback, confirmPayload, denyPayload);
}
private void OkButtonClick(object sender, RoutedEventArgs e)
this.DialogResult = true;
private void CancelButtonClick(object sender, RoutedEventArgs e)
this.DialogResult = false;
}
In our ViewModel, we'll let the RelayCommand bound to EventToCommand on the Yes and No button trigger the confirmation callbacks.
Listing 3 – The ViewModel for the Confirmation Dialog
using System;
using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Command;
namespace TestOutDialogConfirmation.ViewModel
{
public class ConfirmationDialogViewModel : ViewModelBase
public RelayCommand ConfirmCommand { get; private set; }
public RelayCommand DenyCommand { get; private set; }
private string _confirmationQuestion;
public string ConfirmationQuestion
get { return _confirmationQuestion; }
set { _confirmationQuestion = value; RaisePropertyChanged("ConfirmationQuestion");}
public Action<object> ConfirmCallback { get; set; }
public Action<object> DenyCallback { get; set; }
public object ConfirmPayload { get; set; }
public object DenyPayload { get; set; }
public ConfirmationDialogViewModel(string confirmationQuestion, Action<object> confirmCallback, Action<object> denyCallback, object confirmPayload, object denyPayload)
ConfirmationQuestion = confirmationQuestion;
ConfirmCallback = confirmCallback ?? (obj =>{obj = obj;}); // null coalesce to do nothing
DenyCallback = denyCallback ?? (obj => { obj = obj; }); // null coalesce to do nothing
ConfirmPayload = confirmPayload;
DenyPayload = denyPayload;
// set up relay commands to trigger the callbacks once a button has been pressed
ConfirmCommand = new RelayCommand(() => ConfirmCallback(ConfirmPayload));
DenyCommand = new RelayCommand(() => DenyCallback(DenyPayload));
}
Admittedly the dialog looks a bit bulky in the constructor. If you want, you can create a constructor that just takes a confirm callback, because 90% of the time, you'll do nothing if they hit the no button. And you may not need to pass a payload to act upon.
Listing 4 – An easier constructor for the confirmation dialog
public ConfirmationDialogViewModel(string confirmationQuestion, Action<object> confirmCallback)
ConfirmCallback = confirmCallback ?? (obj => { obj = obj; }); // null coalesce to do nothing
DenyCallback = (obj => { obj = obj; }); // do nothing
ConfirmCommand = new RelayCommand(() => ConfirmCallback(null));
DenyCommand = new RelayCommand(() => DenyCallback(null));
The following code shows the MVVM Confirm dialog in action from the main page. Pressing a button on the page will bring up the dialog. If the user clicks Yes, they wish to continue, the messagebox will show let's play again through the confirm callback.
Listing 5 – The Confirmation Dialog in Action
using System.Windows;
using System.Windows.Controls;
namespace TestOutDialogConfirmation.View
public partial class MainPage : UserControl
public MainPage()
private void button1_Click(object sender, System.Windows.RoutedEventArgs e)
ConfirmationDialog dlg = new ConfirmationDialog("Do you wish to continue?", o => MessageBox.Show("let's play again"), null, null, null);
dlg.Show();
The results are shown in figure 3 below. Always keep in mind that when using your ConfirmationDialog, all code after dlg.Show will be executed because of the asynchronouse behavior of Silverlight. Therefore, you want to always put any code that you want executed after the user confirms in the callback you pass into the constructor of the Confirmation Dialog. Callbacks give you a way to control the temporal flow of execution inside of a Silverlight application. (Callbacks are especially significant when doing client/server calls from the Silverlight client).
Figure 4 - Results of Hitting a Yes Confirmation
Conclusion
The thing I love about Silverlight is that it gives you the flexibility to create controls the way you wish to behave. Dialogs are an example of a control that you can get to act the way you want. In an asynchronous world, you may wish that the dialog react to some code that is not part of the OnClick event handler inside your window. Callbacks provide a convenient way to tell the dialog what code you want executed once the okay button is pressed. If you are using an MVVM architecture, you probably want your dialog to call code inside your ViewModel. MVVM Light provides a nice mechanism, called EventToCommand, that allows us to trigger code in our ViewModel from the click of a button or the press of a key on the keyboard. For our confirmation dialog, the code that gets executed in the ViewModel is the original code we passed into the constructor of our Dialog. Now that we have confirmed that MVVM is the way to go when writing a dialog, you may want to experiment with MVVM in your next dialog control in Silverlight and .NET.
DataGrid Paging in Silverlight using WCF RIA Services
User Control in Silverlight
While going through any eg of ViewModels.... every viewModel is getting inherited from ViewModelBase, i knw it is required for OnpropertyChanged() in INotifiedPropertyChanged Interface.
Question : Can we downLoad and start working on it.
or
i am not understanding the code written for Onpropertychanged()
please explain me.
tnx | http://www.c-sharpcorner.com/uploadfile/mgold/strategy-for-a-confirmation-dialog-in-an-mvvm-world-in-silverlight/ | CC-MAIN-2013-20 | en | refinedweb |
JSP bean set property
JSP bean set property
... you a code that help in describing an
example from JSP bean set property...:useBean> -
The < jsp:use Bean>
instantiate a bean class
bean object
bean object i have to retrieve data from the database and want to store in a variable using rs.getString and that variable i have to use in dropdown in jsp page.
1)Bean.java:
package form;
import java.sql.*;
import
JSP
access application data stored in JavaBeans components. The jsp expression language allows a page author to access a bean using simple syntax such as $(name). Before JSP 2.0, we could use only a scriptlet, JSP expression, or a custom
Form processing using Bean
Form processing using Bean
In this section, we will create a JSP form using bean ,which will use a class
file for processing. The standard way of handling...;beanformprocess2.jsp" to retrieve the data
from bean..
<jspSP
Use Of Form Bean In JSP
Use Of Form Bean In JSP
... about the
procedure of handling sessions by using Java Bean. This section provides...
or data using session through the Java Bean.
Program Summary:
There are
jsp - JSP-Servlet
/loginbean.shtml
http...://
Connect from database using JSP Bean file
Connect from database using JSP Bean file....
<jsp:useBean id=?bean name?
class=?bean class? scope... that defines the bean.
<jsp:setProperty name = ?id?
property
jsp
jsp how to create a table in oracle using jsp
and the table name is entered in text feild of jsp page code for jsp automatic genration using html
JSP
JSP FILE UPLOAD-DOWNLOAD code USING J
jsp
jsp how to assign javascript varible to java method in jsp without using servlet
Writing Calculator Stateless Session Bean
'
bean.
Writing JSP and Web/Ear component
Our JSP file access the session bean...Writing Calculator Stateless Session Bean... Bean for multiplying the values entered by user. We will use ant
build tool
Using Beans in JSP. A brief introduction to JSP and Java Beans.
JSP
; Hi Friend,
Please visit the following links:
Thanks
Getting a Property value in jsp
GetProperties()
{}
}
In the above example, we are using bean with <jsp... of
accessing properties of bean by using getProperty tag which automatically sends... a Property Value</H1>
<jsp:useBean p>in my project i have following jsp in this jsp the pagesize..." prefix="bean"%></p>
<p><%
Log log = LogFactory.getLog...()%>" class="inactiveFuncLn" target="bodyFrame"><bean:message bundle... an id of some format using the following code.
public class GenerateSerialNumber
jsp
JSP entered name and password is valid HII Im developing a login page using jsp and eclipse,there are two fields username and password,I want...
{
response.sendRedirect("/examples/jsp/login.jsp");
}
}
catch sir i am trying to connect the jsp with oracle connectivity... are using oracle oci driver,you have to use:
Connection connection... are using oracle thin driver,you have to use:
Connection connection,please send me login page code using jsp
1)login.jsp:
<html>
<script>
function validate(){
var username=document.form.user.value;
var password=document.form.pass.value;
if(username==""){
alert
jsp ouestion
jsp ouestion I have 1 report in my project.In that report i have used java bean.I want to make 1 more report by using data of first report. plz help me.....to get data from bean in second report
jsp
jsp I'm attempting to run the program , I got the following error.I am using
Apache Tomcat/5.0.28 , jdk1.6
HTTP Status 500 -
type Exception report
description The server encountered an internal error
Implementing Bean with script let in JSP
Implementing Bean with script let in JSP...;
This application illustrates how to create a bean class and how to
implement it with script let of jsp for inserting the data in mysql table.
In this example we create
JSP -... displays all of the items selected. The selection of items is made using checkboxes
report generation using jsp
report generation using jsp report generation coding using jsp
java bean code - EJB
java bean code simple code for java beans Hi Friend... the Presentation logic. Internally, a bean is just an instance of a class.
Java Bean Code:
public class EmployeeBean{
public int id;
public
using Bean and Servlet In JSP |
Record user login and
logout timing In JSP... in JSP File |
Alphabetical DropDown Menu In JSP |
Using Bean
Counter... to Open JSP
| Add and
Element Using Javascript in JSP |
Java bean
A Java Program by using JSP
A Java Program by using JSP how to draw lines by using JSP plz show me the solution by using program
jsp login page
jsp login page hi tell me how to create a login page using jsp and servlet and not using bean... please tell how to create a database in sql server... please tell with code
Error in using java beans - JSP-Servlet
Error in using java beans I am getting the following error when I run the jsp code.
type Exception report
description The server...: Unable to load class for JSP
library management system jsp project
library management system jsp project i need a project of library management system using jsp/java bean
generate charts using JSP
generate charts using JSP any one know coding for generate bar chart or pie chart using JSP
ScatterPlot using jsp
ScatterPlot using jsp hi,
can anybody provide me code for ScatterPlot using jsp.
thanks
datasource in jsp using struts
datasource in jsp using struts how to get the datasource object in jsp.datasource is configured in struts-config.xml
JSP Examples
Authentication using Bean and Servlet In JSP
Record user login and logout....
This section will help you create web applications using JSP. In this page
you...
in JSP.
Before starting to create example using JSP we should have focus
JSP - Struts
JSP Hi,
Can you please tell me how to load the values in selectbox which are stored in arraylist using struts-taglibs
Note:I am neither using form nor bean...
I want the arraylist values to be dispalyed in selectbox
Ask Questions?
If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for.
Ask your questions, our development team will try to give answers to your questions. | http://roseindia.net/tutorialhelp/comment/100043 | CC-MAIN-2013-20 | en | refinedweb |
XML::Easy::Tranform - XML processing with a clean interface
The
XML::Easy::Transform:: namespace exists to contain modules that perform transformations on XML documents,
or parts thereof,
in the form of XML::Easy::Element and XML::Easy::Content nodes.
XML::Easy is a collection of modules relating to the processing of XML data.
It includes functions to parse and serialise the standard textual form of XML.
When XML data is not in text form,
XML::Easy processes it in an abstract syntax-neutral form,
as a collection of linked Perl objects.
This in-program data format shields XML users from the infelicities of XML syntax.
Modules under the
XML::Easy::Transform:: namespace operate on XML data in this abstract structured form,
not on textual XML.
A transformation on XML data should normally be presented in the form of a function,
which takes an XML::Easy::Element node as its main parameter,
and returns an XML::Easy::Element node (or
dies on error).
The input node and output node each represent the root element of the XML document (or fragment thereof) being transformed.
These nodes,
of course,
contain subordinate nodes,
according to the structure of the XML data.
A reference to the top node is all that is required to effectively pass the whole document.
CPAN distributions under this namespace are:
Manages XML Namespaces by hoisting all namespace declarations to the root of a document.
Andrew Main (Zefram) <zefram@fysh.org>
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~zefram/XML-Easy-0.009/lib/XML/Easy/Transform.pod | CC-MAIN-2013-20 | en | refinedweb |
ActionScript 3 inheritance: developers beware!
Consider the following C#/ Java code (ignoring package-level considerations, it will compile in either language):
public class BaseClass { public static int i; public static void method() { } } class ChildClass: BaseClass { } class TestClass { public TestClass() { int j = ChildClass.i; ChildClass.method(); } }
With C# and Java, there is nothing spectacular about this code. Sadly though, achieving this functionality in ActionScript requires a significant work-around. If we simply rewrite the above using AS3 syntax, we get the following errors:
public class BaseClass { public static var i:int; public static function method():void { } } public class ChildClass extends BaseClass { } public function TestClass() { var j:int = ChildClass.i; ChildClass.method(); }
and this code won’t compile. Instead we get:
1061: Call to a possibly undefined method method through a reference with static type Class. TestClass.as line 4 Flex Problem 1119: Access of possibly undefined property i through a reference with static type Class. TestClass.as line 3 Flex Problem
As the “Static properties not inherited” section of the language reference explains, AS3 doesn’t support static member inheritance, thus the errors. This limitation can be worked around with a rather ugly bodge. Modify ChildClass to the following:
public class ChildClass extends BaseClass { public static function get i():int { return BaseClass.i; } public static function set i(val:int):void { BaseClass.i = i; } public static function method():void { BaseClass.method(); } }:
public class ChildClass extends BaseClass { private var j:int = i; public function ChildClass() { method(); } }
This code compiles just fine. Even though i and method aren’t accessible via ChildClass for code external to ChildClass, within that class, all parent and other ancestor static members are accessible.Tweet
Share This Post...
6 comments so far, click here to read them or add another
6 Comments so far
Ugly ugly ugly, it’s a mickey mouse compiler
Nice workaround.
It’s interesting to see how Java and C developers think differnet ways.
But IMHO this is good restriction in AS3.
If you have a habit of unit-testing your code, you know that it is bad habit to use statics. Static inheritance smells like bad thinking, after all for what and where do we need so many static classes?
Actually, there is not even an abstract class in AS3, but you can still use any class as abstract if you need one.
Even if some old language is capable of doing something, it doesn’t mean that we should be always able to do that. Comparing languages you should always think wider, than just one detail.
-keep up the good work…
You make a very good point dejavu: statics should be used with care as they do cause all sorts of problems with unit testing and can cause strong coupling in the code. I came across this with static inheritance when investigation how to unit test a PureMVC system, as it uses two strong code smells: statics and singletons. But more on that in a month or so when I finish the work…
you can always do the old BaseClass.method()
@john,
I agree: BaseClass.method() is the best way with AS3. Using BaseClass.method() requires a paradigm shift by those used to using Java, C# etc though.
Statics are very useful, sometimes it can be the most elegant solution to a problem (even if it makes your code a little less modular). It’s hard to take AS3 seriously when it can’t do what is normal in any other mainstream languages. I say this as a guy who makes a living working with flash.
It’s very similar to how you can’t include parameters in an AS3 interface. Doesn’t make any sense. Adobe should look at HaXe, it’s more of a real language and compiles your swf file to run up to 3 times faster. | http://www.davidarno.org/2009/09/25/actionscript-3-inheritance-developers-beware/ | CC-MAIN-2013-20 | en | refinedweb |
If you have any other questions or comments, please let me know. knownNodes currently has 925 nodes for this stream. CPU usage down to 40%. asked 6 years ago viewed 5136 times active 6 years ago Get the weekly newsletter!
remoteCommand 'addr' from 81.98.253.250 addr message contains 1 IP addresses. But I'm not sure about either. Using the cProfile module If you want to know how much time is spent on each function and method, and how many times each of them is called, you can use Except this appears to be by design.
Why didn't Dumbledore appoint the real Mad Eye Moody to teach Defense Against Dark Arts? This is why the unix time utility may be useful, as it is an external Python measure. Contributor fiatflux commented Jun 27, 2013 If I wasn't running around doing travel stuff (restricting my work here to about 5 minutes at a time), I would rig it up with
Hacker used picture upload to get PHP code into my site How does Decommission (and Revolt) work with multiple permanents leaving the battlefield? With the CPU usage this is delivering, I would think it's more likely that the client is to trying to decrypt every message, and not just the ones intended for it. Python's sleep function allows floating point number as argument though, so just put in whatever makes sense (like 0.2 = 200ms)... (CPU will increase, since you paused for a whole second Python Nice remoteCommand 'inv' from 68.113.120.119 Inventory (in memory) has inventory item already.
Back to top Report #13 gibbonweb Posted 04 August 2011 - 04:53 AM gibbonweb 兄ヨハネス Members 2,078 posts Joined: 23-June 10 LocationMunich(DE) Expertise:HTML, CSS, PHP, Javascript, Python, SQL, Graphics Yay, thanks! Python Reduce Cpu Usage knownNodes currently has 925 nodes for this stream. Contributor acejam commented Jun 27, 2013 This CPU usage bug happens for me as soon as I start up the application. news How to help reduce students' anxiety in an oral exam?
Reload to refresh your session. Python Get Cpu Usage Of Process remoteCommand 'addr' from 216.14.37.242 addr message contains 1 IP addresses. remoteCommand 'inv' from 216.14.37.242 Inventory (in memory) has inventory item already. knownNodes currently has 926 nodes for this stream.
For example, let's measure how long it takes to sort an array of 2000000 random numbers: @fn_timer def random_sort(n): return sorted([random.random() for i in range(n)]) if __name__ == "__main__": random_sort(2000000) 1234567 @fn_timerdef The script ALWAYS uses 100% of CPU. Python Increase Cpu Usage. Python Cpu Usage Linux Node.js | Ruby on Rails | Python | PHP | ScalaWhat is necessary to change a person is to change his awareness of himself.
def action_print(): print "hello there" interval = 5 next_run = 0 while True: while next_run > time.time(): pass next_run = time.time() + interval action_print() I would like to avoid putting process navigate here remoteCommand 'inv' from 74.74.103.120 Inventory (in memory) has inventory item already. remoteCommand 'getdata' from 68.113.120.119 received getdata request for item: a1dfdac227c4c6a7b4399add53c19c036ef8e521c15522e3f41c1031686bd80f sending msg Total message processing time: 0.600827932358 seconds. Use a decorator to time your functions The simpler way to time a function is to define a decorator that measures the elapsed time in running the function, and prints the result: Python Use All Cpu Cores
Digital Hardness of Integers Equivalent form of Black-Scholes Equation (to transform to heat equation) more hot questions question feed lang-py about us tour help blog chat data legal privacy policy work Use the memory_profiler module The memory_profiler module is used to measure memory usage in your code, on a line-by-line basis. Not the answer you're looking for? Check This Out Is the binomial theorem actually more efficient than just distributing Is there any way to take stable Long exposure photos without using Tripod?
Does every data type just boil down to nodes with pointers? Python Busy Wait I'm behind a firewall and running with yellow status. I tried to use sleep however anything that is less than a second eats up the cpu. –m1k3y3 Feb 19 '12 at 13:35 Do you want this process to
You signed out in another tab or window. i'll give numpy a chance. The Operating system and version number you are using. Python Profiler Cpu Usage remoteCommand 'inv' from 81.98.253.250 Inventory (in memory) has inventory item already.
remoteCommand 'inv' from 81.98.253.250 Inventory (in memory) has inventory item already. remoteCommand 'inv' from 92.25.250.208 Inventory (in memory) has inventory item already. Right now your program is always doing everything as fast as it can, rather than giving some time for the Pi to rest or do something else. this contact form remoteCommand 'inv' from 46.208.242.157 Inventory (in memory) has inventory item already.
Join them; it only takes a minute: Sign up CPU reaching 100% usage with select in python up vote 0 down vote favorite 2 I have a problem with a select To run the time utility type: $ time -p python timing_functions.py 1 $ time -p python timing_functions.py which gives the output: Total time running random_sort: 1.3931210041 seconds real 1.49 user 1.40 I will look at possibly making your suggested changes some time in the future. But I don't want that since my program will run for long lengths of time, etc.
I don't think it's related to user disconnections, but I could be wrong. remoteCommand 'addr' from 68.113.120.119 addr message contains 1 IP addresses. JamesTheAwesomeDude commented Jun 12, 2013 Same here; 12.04 LTS x64. This is a limitation due to the Python GIL.
And if you didn't install the psutil package, maybe you're still waiting for the results! | http://juicecoms.com/cpu-usage/python-increase-cpu-usage.html | CC-MAIN-2017-51 | en | refinedweb |
Hey guys,
Just needing your help on this code.
#include <stdio.h> int main() { FILE *fin,*fout; char c; fin=fopen("input.img","rb"); fout=fopen("output.img","wb"); while ((c=fgetc(fin))!= EOF) { fputc(c, fout); } return 0; fclose(fin); fclose(fout); }
I'm trying to unscramble a scrambled image and this just outputs nothing. Would you guys be able to help me? Thanks.
Here's part of what I need to do. I don't know how to reverse exclusive OR :
The next random number in the sequence is found from the current number using the following linear congruential generator
n=(n*106+1283)%6075;
where n is the current number. The first number in the sequence is 1 and so the sequence starts:
1,1389,2717,3760,4968,5441,904.....
The image is stored in a file called mystery.img which is exactly 40000 bytes long. Each byte gives the brightness of a point in the image. You can view the file using the program display.exe on the H: drive.
Each byte in the image file has been EXCLUSIVE ORed with the bottom 8 bits of the corresponding number in the sequence so that the image is lost in the randomness. This operation is reversible. | https://www.daniweb.com/programming/software-development/threads/302835/need-help-asap-pls | CC-MAIN-2017-51 | en | refinedweb |
RTSP & QML
Hey !
I am recovering and displaying a video from an IP camera using a QML sourcecode.
I'm using a basic mediaplayer
But here is my problem. For now I am directly linked to my camera, but, in the end, my camera will send the video to a kind of controller. This controller will send me under UDP packet a lot of information, including the camera video.
I also know that, the controller can "recognize" the video, and put an "ID Video" on all UDP packets corresponding to the camera before sending me.
How should I modify my code to focus on these UDP packet and display the video on my computer ?
Here is my first code :
RTSP.qml :
import QtQuick 2.0 import QtQuick.Controls 2.0 import QtQuick.Layouts 1.3 import QtQuick.Controls.Material 2.0 import QtQuick.Controls.Universal 2.0 import Fluid.Controls 1.0 import QtMultimedia 5.5 import QtQuick.Window 2.0 import QtQuick 2.7 import "elements" Tab { //visible: true //visibility: maximised title: "RTSP Stream" SwipeView{ id: swipeview anchors.fill: parent currentIndex: tabBar.currentIndex Page{ MediaPlayer{ id: mediaplayer source: "rtsp://192.168.1.206/axis-media/media.amp" } VideoOutput{ anchors.fill: parent source: mediaplayer } Text{ id:msgPage1 anchors.centerIn: parent font.pointSize: 32 text: qsTr("RTSP Launch") } MouseArea{ id: playArea anchors.fill: parent onClicked: { if(msgPage1.visible){ print("initi stream") msgPage1.visible = false mediaplayer.play() } else{ print("para stream") mediaplayer.stop() msgPage1.visible = true } } } } } }
main.qml :
RowLayout { spacing: 0 anchors.fill: parent LeftStatusStack { id: leftStatusStack anchors.top : parent.top } Rectangle { color: Material.primary Layout.fillHeight: true width:theme.horizontalSepWidth } ... RTSP{ id: rtspTab objectName: "rtspTab" visible: true }
I dont put all the main because I think you dont need it but there is a lot of module (all working)
So to sum up : The video arrive with a lot of other UDP packet, video packet contains a identificator, I just want to display that video, how can I modify my code ? :)
Thanks for help ;)
Hi,
It's a bit vague. What will that controller be ? Won't it just add data to the video stream ?
You make it sound like it's going to interleave information so generate new frames between video frames.
sorry for the time of response
Actually, it will add data yes. Most of the data will be video stream but not all.That's why I need to recover only video data and, as I explain, I know that these video data have an identificator (something like 0x0019 )
What I meant is, is the stream altered or can you still view it on e.g. VLC ?
- Pablo J. Rogina
@Chanchan said in RTSP & QML:
the stream is not altered
somehow contradicts
video arrive with a lot of other UDP packet
please answer @SGaist question regarding what happens if you use i.e. VLC to connect to your server output, can you view the video? I bet not...
no, as I said, I recover a lot of UDP packets, but only one type is interesting for me : video ones. And that I want is to detect these packet and display them in a video module
I didnt try with VLC but I used Gstreamer and yes I can display it
- Pablo J. Rogina
@Chanchan said in RTSP & QML:
I used Gstreamer and yes I can display it
Mmm, strange. So somehow Gstreamer is smart enough to drop the non-video related UDP packets?
So my guess is that device is just adding meta data to your video stream so it should play the same especially if GStreamer is able to show it.
This post is deleted!
sorry for the response time again, ok I will search in this way then, thanks for answering :) | https://forum.qt.io/topic/103343/rtsp-qml | CC-MAIN-2019-30 | en | refinedweb |
This is the implementation of the coupling of an SM gauge boson to a pair of sfermions. More...
#include <SSWSSVertex.h>
This is the implementation of the coupling of an SM gauge boson to a pair of sfermions.
It inherits from VSSVertex and implements the setCoupling() method.
Definition at line 28 of file SSWSSVertex.h.
Make a simple clone of this object.
Implements ThePEG::InterfacedBase.
Definition at line 81 of file SSWSS SSWSS::GeneralVSSVertex.
The static object used to initialize the description of this class.
Indicates that this is a concrete class with persistent data.
Definition at line 108 of file SSWSSVertex.h. | https://herwig.hepforge.org/doxygen/classHerwig_1_1SSWSSVertex.html | CC-MAIN-2019-30 | en | refinedweb |
The crypt module provides an interface to the UNIX crypt() routine that is used to encrypt passwords on many UNIX systems.
crypt(word, salt)
Encrypts word using a modified DES algorithm. salt is a two-character seed used to initialize the algorithm. Returns the encrypted word as a string. Only the first eight characters of word are significant.
The following code reads a password from the user and compares it against the value in the system password database:
import getpass import pwd import crypt uname = getpass.getuser() # Get username from environment pw = getpass.getpass() # Get entered password realpw = pwd.getpwnam(uname)[1] # Get real password entrpw = crypt.crypt(pw,realpw[:2]) # Encrypt if realpw == entrpw: # Compare ...
No credit card required | https://www.oreilly.com/library/view/python-essential-reference/0672328623/0672328623_ch19lev1sec2.html | CC-MAIN-2019-30 | en | refinedweb |
1
4.2.2 Credit appraisal and credit decision-making ............................................. 43
5.3.1 Bank Guidelines for investments in other than Government Securities ......... 59
2
6.2 Para-banking Activities ..................................................................................... 70
3
7.5.2 Filing a Complaint to the Banking Ombudsman ......................................... 85
4
Distribution of weights in the
Commercial Banking in India: A Beginner's Module Curriculum
Note: Candidates are advised to refer to NSE's.
5
CHAPTER 1: Introduction
Banks have played a critical role in the economic development of some developed countries
such as Japan and Germany and most of the emerging economies including India. Banks today
are important not just from the point of view of economic growth, but also financial stability. In
emerging economies, banks are special for three important reasons. were
traditionally out of bounds for them, non-bank intermediaries have begun to perform many of
the functions of banks. Banks thus compete not only among themselves, but also with non-
bank financial intermediaries, and over the years, this competition has only grown in intensity.
Globally, this has forced the banks to introduce innovative products, seek newer sources of
income and diversify into non-traditional activities.
This module provides some basic insights into the policies and practices currently followed in
the Indian banking system. The first two chapters provide an introduction to commercial
banking in India and its structure. Bank deposits are dealt with in detail in Chapter 3, lending
and investments in Chapter 4 & Chapter 5 respectively. Chapter 6 deals with other basic
banking activities of commercial banks, while Chapters 7 and 8 explain the relationship between
a bank and its customers and the trends in modern banking respectively.
1
Each of these functions is described in detail in later chapters.
6
1.2 Evolution of Commercial Banks in India
The commercial banking industry in India started in 1786 with the establishment of the Bank
of Bengal in Calcutta. The Indian Government at the time established three Presidency banks,
viz., the Bank of Bengal (established in 1809), the Bank of Bombay (established in 1840) and
the Bank of Madras (established in 1843). In 1921, the three Presidency banks were
amalgamated to form the Imperial Bank of India, which took up the role of a commercial bank,
a bankers' bank and a banker to the Government. The Imperial Bank of India was established
with mainly European shareholders. It was only with the establishment of Reserve Bank of
India (RBI) as the central bank of the country in 1935, that the quasi-central banking role of
the Imperial Bank of India came to an end.
In 1860, the concept of limited liability was introduced in Indian banking, resulting in the
establishment of joint-stock banks. In 1865, the Allahabad Bank was established with purely
Indian shareholders. Punjab National Bank came into being in 1895. Between 1906 and 1913,
other banks like Bank of India, Central Bank of India, Bank of Baroda, Canara Bank, Indian
Bank, and Bank of Mysore were set up.
After independence, the Government of India started taking steps to encourage the spread of
banking in India. In order to serve the economy in general and the rural sector in particular,
the All India Rural Credit Survey Committee recommended the creation of a state-partnered
and state-sponsored bank taking over the Imperial Bank of India and integrating with it, the
former state-owned and state-associate banks. Accordingly, State Bank of India (SBI) was
constituted in 1955. Subsequently in 1959, the State Bank of India (subsidiary bank) Act was
passed, enabling the SBI to take over eight former state-associate banks as its subsidiaries.
To better align the banking system to the needs of planning and economic policy, it was
considered necessary to have social control over banks. In 1969, 14 of the major private
sector banks were nationalized. This was an important milestone in the history of Indian
banking. This was followed by the nationalisation of another six private banks in 1980. With
the nationalization of these banks, the major segment of the banking sector came under the
control of the Government. The nationalisation. However, this arrangement also
saw some weaknesses like reduced bank profitability, weak capital bases, and banks getting
burdened with large non-performing assets.
To create a strong and competitive banking system, a number of reform measures were initiated
in early 1990s. The thrust of the reforms was on increasing operational efficiency, strengthening
supervision over banks, creating competitive conditions and developing technological and
institutional infrastructure. These measures led to the improvement in the financial health,
soundness and efficiency of the banking system.
7
One important feature of the reforms of the 1990s was that the entry of new private sector
banks was permitted. Following this decision, new banks such as ICICI Bank, HDFC Bank, IDBI
Bank and UTI Bank were set up.
Commercial banks in India have traditionally focused on meeting the short-term financial
needs of industry, trade and agriculture. However, given the increasing sophistication and
diversification of the Indian economy, the range of services extended by commercial banks
has increased significantly, leading to an overlap with the functions performed by other financial
institutions. Further, the share of long-term financing (in total bank financing) to meet capital
goods and project-financing needs of industry has also increased over the years.
The main functions of a commercial bank can be segregated into three main areas: (i) Payment
System (ii) Financial Intermediation (iii) Financial Services.
Banks are at the core of the payments system in an economy. A payment refers to the
means by which financial transactions are settled. A fundamental method by which
banks help in settling the financial transaction process is by issuing and paying cheques
issued on behalf of customers. Further, in modern banking, the payments system also
involves electronic banking, wire transfers, settlement of credit card transactions, etc.
In all such transactions, banks play a critical role.
The second principal function of a bank is to take different types of deposits from
customers and then lend these funds to borrowers, in other words, financial
intermediation. In financial terms, bank deposits represent the banks' liabilities, while
loans disbursed, and investments made by banks are their assets. Bank deposits serve
the useful purpose of addressing the needs of depositors, who want to ensure liquidity,
2
The functions of commercial banks have been described in detail in later chapters.
8
safety as well as returns in the form of interest. On the other hand, bank loans and
investments made by banks play an important function in channelling funds into
profitable as well as socially productive uses.
Banks face competition from a wide range of financial intermediaries in the public and private
sectors in the areas of financial intermediation and financial services (although the payments
system is exclusively for banks). Such intermediaries form a diverse group in terms of size and
nature of their activities, and play an important role in the financial system by not only competing
with banks, but also complementing them in providing a wide range of financial services.
Some of these intermediaries include:
Term-lending institutions
Insurance companies
Mutual funds
Term lending institutions exist at both state and all-India levels. They provide term loans (i.e.,
loans with medium to long-term maturities) to various industry, service and infrastructure
sectors for setting up new projects and for the expansion of existing facilities and thereby
compete with banks. At the all-India level, these institutions are typically specialized, catering
to the needs of specific sectors, which make them competitors to banks in those areas.3 These
include the Export Import Bank of India (EXIM Bank), Small Industries Development Bank of
India (SIDBI), Tourism Finance Corporation of India Limited (TFCI), and Power Finance
Corporation Limited (PFCL).
3
A notable exception is the IFCI Ltd, which lends into a variety of sectors.
9
At the state level, various State Financial Corporations (SFCs) have been set up to finance and
promote small and medium-sized enterprises. There are also State Industrial Development
Corporations (SIDCs), which provide finance primarily to medium-sized and large-sized
enterprises. In addition to SFCs and SIDCs, the North Eastern Development Financial Institution
Ltd. (NEDFI) has been set up to cater specifically to the needs of the north-eastern states.
India has many thousands of non-banking financial companies, predominantly from the private
sector. NBFCs are required to register with RBI in terms of the Reserve Bank of India
(Amendment) Act, 1997. The principal activities of NBFCs include equipment-leasing, hire-
purchase, loan and investment and asset finance. NBFCs have been competing with and
complementing the services of commercial banks for a long time. All NBFCs together currently
account for around nine percent of assets of the total financial system.
Housing-finance companies form a distinct sub-group of the NBFCs. As a result of some recent
government incentives for investing in the housing sector, these companies' business has
grown substantially. Housing Development Finance Corporation Limited (HDFC), which is in
the private sector and the Government-controlled Housing and Urban Development Corporation
Limited (HUDCO) are the two premier housing-finance companies. These companies are major
players in the mortgage business, and provide stiff competition to commercial banks in the
disbursal of housing loans.
Mutual funds offer competition to banks in the area of fund mobilization, in that they offer
alternate routes of investment to households. Most mutual funds are standalone asset
management companies. In addition, a number of banks, both in the private and public
sectors, have sponsored asset management companies to undertake mutual fund business.
Banks have thus entered the asset management business, sometimes on their own and other
times in joint venture with others.
10
CHAPTER 2: Banking Structure in India
Banking Regulator
The Reserve Bank of India (RBI) is the central banking and monetary authority of India, and
also acts as the regulator and supervisor of commercial banks. (see Table 2.1). Our focus in this module
will be only on the scheduled commercial banks. A pictorial representation of the structure of
SCBs in India is given in figure 2.1.
4
Scheduled banks in India are those that are listed in the Second Schedule of the Reserve Bank of India Act, 1934.
RBI includes only those banks in this schedule which satisfy the criteria as laid down vide section 42 (6) (a) of
the Act.
11
Public Sector Banks
Public sector banks are those in which the majority stake is held by the Government of India
(GoI). Public sector banks together make up the largest category in the Indian banking system.
There are currently 27 public sector banks in India. They include the SBI and its 6 associate
banks (such as State Bank of Indore, State Bank of Bikaner and Jaipur etc), 19 nationalised
banks (such as Allahabad Bank, Canara Bank etc) and IDBI Bank Ltd.
Public sector banks have taken the lead role in branch expansion, particularly in the rural
areas. From Table 2.1, it can also be seen that:
Public sector banks account for bulk of the branches in India (88 percent in 2009).
In the rural areas, the presence of the public sector banks is overwhelming; in 2009,
96 percent of the rural bank branches belonged to the public sector. The private sector
banks and foreign banks have limited presence in the rural areas.
Regional Rural Banks (RRBs) were established during 1976-1987 with a view to develop the
rural economy. Each RRB is owned jointly by the Central Government, concerned State
12
Government and a sponsoring public sector commercial bank. RRBs provide credit to small
farmers, artisans, small entrepreneurs and agricultural labourers. Over the years, the
Government has introduced a number of measures of improve viability and profitability of
RRBs, one of them being the amalgamation of the RRBs of the same sponsored bank within a
State. This process of consolidation has resulted in a steep decline in the total number of RRBs
to 86 as on March 31, 2009, as compared to 196 at the end of March 2005.
In this type of banks, the majority of share capital is held by private individuals and corporates.
Not all private sector banks were nationalized in in 1969, and 1980. The private banks which
were not nationalized are collectively known as the old private sector banks and include banks
such as The Jammu and Kashmir Bank Ltd., Lord Krishna Bank Ltd etc.5 Entry of private sector
banks was however prohibited during the post-nationalisation period. In July 1993, as part of
the banking reform process and as a measure to induce competition in the banking sector, RBI
permitted the private sector to enter into the banking system. This resulted in the creation of
a new set of private sector banks, which are collectively known as the new private sector
banks. As at end March, 2009 there were 7 new private sector banks and 15 old private sector
banks operating in India.6
Foreign Banks
Foreign banks have their registered and head offices in a foreign country but operate their
branches in India. The RBI permits these banks to operate either through branches; or through
wholly-owned subsidiaries.7 The primary activity of most foreign banks in India has been in
the corporate segment. However, some of the larger foreign banks have also made consumer-
financing a significant part of their portfolios. These banks offer products such as automobile
finance, home loans, credit cards, household consumer finance etc. Foreign banks in India are
required to adhere to all banking regulations, including priority-sector lending norms as
applicable to domestic banks.8 In addition to the entry of the new private banks in the mid-
90s, the increased presence of foreign banks in India has also contributed to boosting competition
in the banking sector.
5
Some of the existing private sector banks, which showed signs of an eventual default, were merged with state
owned banks.
6.
7
In addition, a foreign institution could also invest up to 74% in domestic private bank, in which up to 49% can be
via portfolio investment.
8
Priority sector lending has been described in a later chapter.
13
Box 2.1: Number of Foreign Banks
At the end of June 2009, there were 32 foreign banks with 293 branches operating in India.
Besides, 43 foreign banks were operating in India through representative offices. Under the
World Trade Organisation co-
operative banks (UCBs), whose operations are either limited to one state or stretch across
states. The rural co-operative banks comprise State co-operative banks, district central co-
operative banks, SCARDBs and PCARDBs.9.10 Owing to their widespread
geographical penetration, cooperative banks have the potential to become an important
instrument for large-scale financial inclusion, provided they are financially strengthened.11
The RBI and the National Agriculture and Rural Development Bank (NABARD) have taken a
number of measures in recent years to improve financial soundness of co-operative banks.
The Reserve Bank of India (RBI) is the central bank of the country.12 It was established on
April 1, 1935 under the Reserve Bank of India Act, 1934, which provides the statutory basis for
9
SCARDB stands for state co-operative agricultural and rural development banks and PCARDB stands for primary
co-operative agricultural and rural development banks.
10
In addition, the rural areas are served by a very large number of primary agricultural credit societies (94,942 at
end-March 2008).
11
Financial Inclusion implies provision of financial services at affordable cost to those who are excluded from the
formal financial system..
14
its functioning.:
Supervises banks
As regards the commercial banks, the RBI's role mainly relates to the last two points stated
above.
As the bankers' bank, RBI holds a part of the cash reserves of banks,; lends the banks funds
for short periods, and provides them with centralised clearing and cheap and quick remittance
facilities.
Banks are supposed to meet their shortfalls of cash from sources other than RBI and approach
RBI only as a matter of last resort, because RBI as the central bank is supposed to function as
only the 'lender).13.
13
These are mainly deposits. NDTL is discussed in Chapter 3 under section 3.3.
15.14 This helps the RBI to perform its role as the banker to the Government,
under which the RBI conducts the Government's market borrowing program.
14
The concept of demand and time liabilities has been explained in Chapter 3.
16
CHAPTER 3: Bank Deposit Accounts
As stated earlier, financial intermediation by commercial banks has played a key role in India
in supporting the economic growth process. An efficient financial intermediation process, as is
well known, has two components: effective mobilization of savings and their allocation to the
most productive uses. In this chapter, we will discuss one part of the financial intermediation
by banks: mobilization of savings. When banks mobilize savings, they do it in the form of
deposits, which are the money accepted by banks from customers to be held under stipulated
Since the first episode of bank nationalization in 1969, banks have been at the core of the
financial intermediation process in India. They have mobilized a sizeable share of savings of
the household sector, the major surplus sector of the economy. This in turn has raised the
financial savings of the household sector and hence the overall savings rate. Notwithstanding
the liberalization of the financial sector and increased competition from various other saving
instruments, bank deposits continue to be the dominant instrument of savings in India.
It can be seen from Table 3.1 that gross domestic savings of the Indian economy have been
growing over the years and the household sector has been the most significant contributor to
savings. Household sector saves in two major ways, viz. financial assets and physical assets.
Table 3.2 shows that within the financial savings of the household sector, bank deposits are the
most prominent instrument, accounting for nearly half of total financial savings of the household
sector.
17
Table 3.2: Financial Savings of the Household Sector (Gross)
One of the most important functions of any commercial bank is to accept deposits from the
public, basically for the purpose of lending. Deposits from the public are the principal sources
of funds for banks.
Table 3.3 provides the share of deposits of different classes of scheduled commercial banks
(SCBs). It can be seen that the public sector banks continue to dominate the Indian banking
industry. However, the share of the new private sector banks has been rising at the expense of
the public sector banks, particularly in the last few years.
18
Table 3.3: Share of Deposits of SCBs-Groupwise
2003 2009
Source: Report on Trend and Progress of Banking in India 2008-09 & 2003-04, RBI
Safety of deposits
At the time of depositing money with the bank, a depositor would want to be certain that his/
her money is safe with the bank and at the same time, wants to earn a reasonable return.
The safety of depositors' funds, therefore, forms a key area of the regulatory framework for
banking. In India, this aspect is taken care of in the Banking Regulation Act, 1949 (BR Act).
The RBI is empowered to issue directives/advices on several aspects regarding the conduct of
deposit accounts from time to time. Further, the establishment of the Deposit Insurance
Corporation in 1962 (against the backdrop of failure of banks) offered protection to bank
depositors, particularly small-account holders. This aspect has been discussed later in the
Chapter.
The process of deregulation of interest rates started in April 1992. Until then, all interest rates
were regulated; that is, they were fixed by the RBI. In other words, banks had no freedom to
fix interest rates on their deposits. With liberalization in the financial system, nearly all the
interest rates have now been deregulated. Now, banks have the freedom to fix their own
deposit rates with only a very few exceptions. The RBI prescribes interest rates only in respect
of savings deposits and NRI deposits, leaving others for individual banks to determine.15
15
Savings deposits and NRI deposits have been described later in this chapter.
19
Deposit policy
The Board of Directors of a bank, along with its top management, formulates policies relating
to the types of deposit the bank should have, rates of interest payable on each type, special
deposit schemes to be introduced, types of customers to be targeted by the bank, etc. Of
course, depending on the changing economic environment, the policy of a bank towards deposit
mobilization, undergoes changes.
The bank deposits can also be classified into (i) demand deposits and (b) time deposits.
(i) Demand deposits are defined as deposits payable on demand through cheque or
otherwise. Demand deposits serve as a medium of exchange, for their ownership can
be transferred from one person to another through cheques and clearing arrangements
provided by banks. They have no fixed term to maturity.
(ii) Time deposits are defined as those deposits which are not payable on demand and
on which cheques cannot be drawn. They have a fixed term to maturity. A certificate
of deposit (CD), for example, is a time deposit (See box 3.2)
CDs can be issued by (i) scheduled commercial banks (SCBs) excluding Regional Rural
Banks (RRBs) and Local Area Banks (LABs); and (ii) select all-India Financial Institutions
that have been permitted by the RBI to raise short-term resources within the umbrella limit
fixed by RBI. Deposit amounts for CDs are a minimum of Rs.1 lakh, and multiples thereof.
Demand and time deposits are two broad categories of deposits. Note that these are only
categories of deposits; there are no deposit accounts available in the banks by the names
'demand deposits' or 'time deposits'. Different deposit accounts offered by a bank, depending
on their characteristics, fall into one of these two categories. There are several deposit accounts
offered by banks in India; but they can be classified into three main categories:
Current account
20
Current account deposits fall entirely under the demand-deposit category and term deposit
account falls entirely under time deposit. Savings bank accounts have both demand-deposit
and time-deposit components. In other words, some parts of savings deposits are considered
demand deposits and the rest as time deposits. We provide below the broad terms and conditions
governing the conduct of current, savings and term-deposit accounts.
A current account is a form of demand-deposit, as the banker is obliged to repay these liabilities
on demand from the customer. Withdrawals from current accounts are allowed any number of
times depending upon the balance in the account or up to a particular agreed amount. Current
deposits are non-interest bearing. Among the three broad categories of deposits--current
account deposit, savings accounts deposit and term deposits--current account deposits account
for the smallest fraction.
A current account is basically a running and actively operated account with very little restriction
on the number and amount of drawings. The primary objective of a current account is to
provide convenient operation facility to the customer, via continuous liquidity.
On account of the high cost of maintaining such accounts, banks do not pay any interest on
such deposits. In addition, many banks insist on customers maintaining minimum balances to
offset the transaction costs involved. If minimum balances are not maintained, these banks
charge the customers a certain amount.
Current accounts can be opened by rich individuals/ partnership firms/ private and limited
companies/ Hindu Undivided Families (HUFs)/ societies/ trusts, etc.
Savings deposits are a form of demand deposits, which RBI (3.5
percent as of January 2010).
Savings bank accounts are used by a large segment of small depositors as they can put their
regular incomes into these accounts, withdraw the money on demand and also earn interest
on the balance left in the account.
The flexibility provided by such a product means that savings bank accounts cannot be opened
by big trading or business firms. Similarly, institutions such as government departments and
bodies, local authorities, etc. cannot open savings bank accounts.
21
Savings account deposits together with current account deposits are called CASA deposits
(See Box 3.2).
From a bank's viewpoint, CASA deposits (Current Account and Savings Account deposits)
are low-cost deposits, as compared to other types of deposits. Current account is non-
interest bearing, while interest payable on savings accounts is very low (currently 3.5 percent).
To be competitive, it is important for banks to garner as much low-cost deposits as possible,
because by doing so banks can control the cost of raising deposits and hence can lend at
more competitive rates. The methods used by banks to mobilize CASA deposits include
offering salary accounts to companies, and encouraging merchants to open current accounts,
and use their cash-management facilities.
Banks with low CASA ratios (CASA deposits as % of total deposits) are more dependent on
term deposits for their funding, and are vulnerable to interest rate shocks in the economy,
besides the lower spread they earn. (As discussed above, banks earn profit on the spread
between their deposit and loans rates.)
The table given below shows that the share of current account and savings account (CASA)
deposits in total deposits is the highest for foreign banks followed by the State Bank Group.
It can also be observed that the share of CASA deposits in total deposits of the scheduled
commercial banks as a whole has been declining. This means that the cost of deposit
mobilization of the commercial banks is rising, which may pose a challenge for the banking
sector in the coming years.
March end
A "Term deposit" is a deposit received by the Bank for a fixed period, after which it can be
withdrawn. Term deposits include deposits such as Fixed Deposits / Reinvestment deposits/
22
Recurring Deposits etc. The term deposits account for the largest share and have remained
within the range of 61% to 67 % of total deposits in the recent years.
Fixed deposits on which a fixed rate of interest is paid at fixed, regular intervals;
Re-investment deposits, under which the interest is compounded quarterly and paid
on maturity, along with the principal amount of the deposit. Some banks have introduced
"flexi" deposits under which, the amount in savings deposit accounts beyond a fixed
limit is automatically converted into term-deposits; and
Recurring deposits, under which.
Banks devise various strategies to expand the customer base and reducing the cost of raising
deposits. This is done by identifying target markets, designing the products as per the
requirements for customers, taking measures for marketing and promoting the deposit products.
It is essential not only to expand the customer base but also to retain it. This is done by
providing counselling, after-sales information and also through prompt handling of customer
complaints.
While the strategies for mobilizing bank deposits vary from bank to bank, one common feature
is to maximize the share of CASA deposits (Box 3.2). The other common features generally
observed are as follows:
Staff members posted at branches are adequately trained to offer efficient and courteous
service to the customers and to educate them about their rights and obligations.
23
A bank often offers personalized banking relationship for its high-value customers by
appointing Customer Relationship Managers (CRMs).
While banks endeavour to provide services to the satisfaction of customers, they put
in place an expeditious mechanism to redress the complaints of the customers.
To open and operate a bank account, the following guidelines need to be followed.
Due Diligence Process: A bank before opening any deposit account has to carry out due
diligence as required under "Know Your Customer" (KYC) guidelines issued by RBI and or such
other norms or procedures adopted by the bank.16 The 'due diligence' process, while opening
a deposit account, involves the bank having adequate knowledge of the person's identity,
occupation, sources of income, and location. Obtaining an introduction of the prospective
depositor from a person acceptable to the bank, obtaining recent photographs of people opening/
operating the account are part of the due diligence process. For customers providing proof of
identification and address, there is no need for personal introduction to the bank for opening
of a new savings bank account. To promote financial inclusion in rural areas / tribal areas, KYC
norms have been relaxed for below the poverty line (BPL) families.
Minimum Balance: For deposit products like a savings bank account or a current account,
banks normally stipulate certain minimum balances to be maintained as part of terms and
conditions governing operation of such accounts. But for people below the poverty line, banks
encourage the opening of 'No-frills Accounts', typically a special savings bank account where
no minimum balance requirement is required. For a savings bank account, the bank may also
place restrictions on number of transactions, cash withdrawals, etc., during a given period.
Transparency: Failure to maintain minimum balance in the accounts, where applicable, will
attract levy of charges as specified by the bank from time to time. Similarly, the bank may
specify charges for issue of cheques books, additional statement of accounts, duplicate passbook,
folio charges, etc. All such details regarding terms and conditions for operation of the accounts
and schedule of charges for various services provided should be communicated to the prospective
depositor while opening the account for the sake of transparency.
16
The KYC guidelines have been described in detail in Chapter 7.
24
Eligibility: A savings bank account can be opened by eligible person(s) and certain
organizations/agencies, as advised by the RBI from time to time. But current accounts can be
opened by individuals, partnership firms, private and public limited companies, Hindu Undivided
Families (HUFs), specified associates, society trusts, etc. Eligibility criteria for a savings account
and a current account are largely similar, but there are important differences too. While both
the accounts can be opened by individuals, the savings account cannot be opened by a firm.
Term Deposit Accounts can be opened by all categories of account holders.
Requirement of PAN: In addition to the due diligence requirements, under KYC norms,
banks are required by law to obtain a Permanent Account Number (PAN) from the prospective
account holder or alternate declarations as specified under the Income Tax Act.
Operation of Joint Account: Deposit accounts can be opened by an individual in his own
name or by more than one individual in their own names (known as a 'joint account').
A joint account can be operated by a single individual or by more than one individual jointly.
The mandate for who can operate the account can be modified with the consent of all account
holders. Joint accounts opened by minors with their parents or guardians can be only operated
by the latter.
Accountholders of a joint account can give mandates on the operation of the account, and the
disposal of balances in the event of the demise of one or more of the holders. Banks classify
these mandates as 'Either or Survivor', and 'Anyone or Survivor(s)', etc.
Power of Attorney: At the request of the depositor, the bank can register mandate/power of
attorney given by him authorizing another person to operate the account on his behalf.
Nomination: A depositor is permitted to officially authorize someone, who would receive the
money of his account when the depositor passes away. This is called the nomination process.
Nomination facility is available on all deposit accounts opened by individuals. Nomination is
also available to a sole proprietary concern account. Nomination can be made in favour of one
individual only. Nomination so made can be cancelled or changed by the account holder/s any
time. Nomination can be made in favour of a minor too.
25
Box 3.3: Operation of Special Classes of Deposit Account Holders
Minors' Accounts
Savings bank accounts can be opened by minors along with their guardians, and operated
solely by the guardians, until the minor attains majority. Verification of signatures and other
identification is repeated before the major starts operating the account.
Banks have developed fixed-deposit schemes specifically meant for senior citizens (i.e.,
individuals over the age of 60 years). Such schemes usually provide an incentive by way of
additional interest, over and above the normal rate of interest, on term-deposits across
various maturities. Such schemes are applicable for both fresh deposits as well as renewals
of maturing deposits.
Customer Information
Customer information collected from the customers should not be used for cross-selling of
services or products by the bank, its subsidiaries and affiliates. If the bank proposes to use
such information, it should be strictly with the 'express consent' of the account-holder.
Banks are not expected to disclose details/particulars of the customer's account to a third
person or party without the expressed or implied consent from the customer. However, there
are some exceptions, such as disclosure of information under compulsion of law or where
there is a duty to public for the bank to disclose.
Interest Payments
Savings bank accounts: Interest is paid on savings bank deposit account at the rate
specified by RBI from time to time. In case of savings bank accounts, till recently,
banks paid interest on the minimum balance between the 11th and the last day of the
26
month. With effect from April 1, 2010, banks have been advised to calculate interest
on savings bank deposit by considering daily product, which would benefit the holders
of savings bank accounts.
Term deposits: Term-deposit interest rates are decided by individual banks within
these general guidelines. In terms of RBI directives, interest is calculated at quarterly
intervals on term deposits and paid at the rate decided by the bank depending upon
the period of deposits. The interest on term deposits is calculated by the bank in
accordance with the formulae and conventions advised by Indian Bank Association.17
Also, a customer can earn interest on a term deposit for a minimum period of 7 days,
as stated earlier.
Tax deducted at source (TDS): The bank has statutory obligation to deduct tax at
source if the total interest paid/payable on all term deposits held by a person exceeds
the amount specified under the Income Tax Act and rules there under. The Bank will
issue a tax deduction certificate (TDS Certificate) for the amount of tax deducted. The
depositor, if entitled to exemption from TDS, can submit a declaration to the bank in
the prescribed format at the beginning of every financial year.
The bank on request from the depositor, at its discretion, may allow withdrawal of term-
deposit before completion of the period of the deposit agreed upon at the time of placing the
deposit. Banks usually charge a penalty for premature withdrawal of deposits. The bank shall
declare their penal interest rates policy for premature withdrawal of term deposit, if any, at the
time of opening of the account.
In case.
The Bank may consider requests of the depositor(s) for loan/overdraft facility against term
deposits duly discharged by the depositor(s) on execution of necessary security documents.
17
The Indian Banks' Association (IBA) is an association of banks from both the public and the private sector and
represents the management of banks.
27
The bank may also consider giving an advance against a deposit standing in the name of
minor. However, a suitable declaration stating that the loan is for the benefit of the minor, is to
be furnished by the depositor-applicant.
a) If the depositor has registered nomination with the bank; the balance outstanding in
the account of the deceased depositor will be transferred/ paid to the nominee after
the bank is satisfied about the identity of the nominee, etc.
b) The above procedure will be followed even in respect of a joint account where nomination
is registered with the bank.
c) In case of joint deposit accounts where joint account holders do not give any mandate
for disposal, when one of the joint account holders dies, the bank is required to make
payment jointly to the legal heirs of the deceased person and the surviving depositor(s).
In these cases, delays may ensue in the production of legal papers by the heirs of the
deceased. However, if the joint account holders had given mandate for disposal of the
balance in the account in the forms such as 'either or survivor', 'former/latter or survivor',
'anyone of survivors or survivor'; etc., the payment will be made as per the mandate.
In such cases, there is no delay in production of legal papers by the heirs of the
deceased.
d) In the absence of nomination, the bank will pay the amount outstanding to all legal
heirs against joint application and on receipt of the necessary documents, including
court order.
The Bank will accept 'stop payment' instructions from the depositors in respect of cheques
issued by them. Charges, as specified, will be recovered.
Dormant Accounts
Accounts which are not operated for a considerable period of time (usually 12/24 months for
savings bank accounts and 6/12 months for current accounts), will be transferred to a separate
dormant/inoperative account status in the interest of the depositor as well as the bank.18 The
depositor will be informed if there are charges that the bank would levy on dormant/inoperative
accounts. Such accounts can be used again on an activation request to the bank.
18
Such a practice is in the interest of the depositor since it avoids the possibility of frauds on the account. It is also
in the interest of the bank as it reduces the servicing costs that the bank would have had to incur if the account
were to remain active.
28 (not
a minor) singly or jointly with another individual(s), HUFs, firms, limited companies, associates,
societies, trusts etc. Nomination facility is available to individual(s) holding the lockers singly
or jointly. In the absence of nomination or mandate for disposal of contents of lockers, with a
view to avoid hardship to common persons, the bank will release the contents of locker to the
legal heirs against indemnity on the lines as applicable to deposit accounts.
Depositors having any complaint/grievance with regard to services rendered by the bank have
a right to approach the authorities designated by the bank for handling customer complaints/
grievances. In case the depositor does not get a response from the bank within one month
after the bank receives his representation /complaint or he is not satisfied with the response
received from the bank, he has a right to approach the Banking Ombudsman appointed
by RBI.19
As per the Foreign Exchange Management Act (FEMA), 1999, an NRI means:
Non-Resident Indian National (i.e. Non-resident Indian holding Indian passport), and
(i) Indian citizens who proceed abroad for employment or for any business or vocation in
circumstances indicating an indefinite period of stay outside India;
20
NRI is defined differently under different acts. For the purpose of bank accounts, FEMA definition holds.
29
(ii) Indian citizens working abroad on assignments with foreign governments, international/
multinational agencies such as the United Nations, the International Monetary Fund,
the World Bank etc.
(iii) Officials of Central and State Governments and Public Sector Undertakings (PSUs)
deputed abroad on assignments with foreign governments, multilateral agencies or
Indian diplomatic missions abroad.
PIO (Persons of Indian Origin) is defined as a citizen of any country other than Bangladesh or
Pakistan, if
b. he or either of his parents or any of his grand parents was a citizen of India; or
In general, NRI is thus a person of Indian nationality or origin, who is resident abroad for
business or employment or vocation, or with the intension of seeking employment or vocation
and the period of stay abroad is uncertain.21
These are Rupee accounts and can be opened by any person resident outside India. Typically,
when a resident becomes non-resident, his domestic Rupee account gets converted into an
NRO account. In other words, it is basically a domestic account of an NRI which help him get
credits which accrue in India, such as rent from property or income from other investments.
New accounts can be opened by sending fresh remittances from abroad. NRO accounts can be
opened only as savings account, current account, recurring deposits and term-deposit accounts.
Regulations on interest rates, tenors etc. are similar to those of domestic accounts. While the
principal of NRO deposits is non-repatriable, current income such as interest earnings on NRO
deposits are repatriable. Further, NRI/PIO may remit an amount, not exceeding US$1million
per financial year, for permissible transactions from these accounts.
The Non-Resident (External) Rupee Account NR(E)RA scheme, also known as the NRE scheme,
was introduced in 1970. This is a rupee account. Any NRI can open an NRE account with funds
remitted to India through a bank abroad.
An NRE rupee account may be opened as current, savings, recurring or term deposit account.
Since this account is maintained in Rupees, the depositor is exposed to exchange risk.
21
Thus, a student going abroad for studies or a tourist going abroad for brief visit is not an NRI.
30
This is a repatriable account (for both interest and principal) and transfer from/to another NRE
account or FCNR (B) account (see below) is also permitted. Local payments can also be freely
made from NRE accounts. NRIs / PIOs have the option to credit the current income to their
NRE accounts, provided income tax has been deducted / provided for. Interest rates on NRE
accounts are determined by the RBI, for both savings and term deposits.
The Foreign Currency Non-Resident Account (Banks) or FCNR(B) accounts scheme was
introduced with effect from May 15, 1993 to replace the then prevailing FCNR(A) scheme
introduced in 1975.
These are foreign currency accounts, which can be opened by NRIs in only designated
currencies: Pound Sterling, US Dollar, Canadian Dollar, Australian Dollar, EURO and
Japanese Yen.
Deposits are in foreign currency and are repaid in the currency of issue. Hence, there
is no exchange risk for the account holder.
Transfer of funds from existing NRE accounts to FCNR(B) accounts and vice- versa, of
the same account holder, is permissible without the prior approval of RBI.
A bank should obtain the prior approval of its Board of Directors for the interest rates that it
will offer on deposits of various maturities, within the ceiling prescribed by RBI.
Table 3.4 compares the different features of the deposit accounts available to the NRIs.
31
Table 3.4 Comparison of Deposit Schemes available to NRIs
Period for fixed For terms not less At the discretion of As applicable to
deposits than 1 year and not the bank resident accounts
more than 5 years.
Note:
* Except for the following: (i) current income, (ii) up to USD 1 million per financial year (April-
March), for any bonafide purpose out of the balances in the account / sale proceeds of assets
in India acquired by way of inheritance / legacy inclusive of assets acquired out of settlement
subject to certain conditions.
32
3.7 Deposit Insurance
Deposit insurance helps sustain public confidence in the banking system through the protection
of depositors, especially small depositors, against loss of deposit to a significant extent. In
India, bank deposits are covered under the insurance scheme offered by Deposit Insurance
and Credit Guarantee Corporation of India (DICGC), which was established with funding from
the Reserve Bank of India. The scheme is subject to certain limits and conditions. DICGC is a
wholly-owned subsidiary of the RBI.
All commercial banks including branches of foreign banks functioning in India, local area banks
and regional rural banks are insured by the DICGC.22
Further, all State, Central and Primary cooperative banks functioning in States/Union Territories
which have amended the local Cooperative Societies Act empowering RBI suitably are insured
by the DICGC. Primary cooperative societies are not insured by the DICGC.
In the event of a bank failure, DICGC protects bank deposits that are payable in India. DICGC
is liable to pay if (a) a bank goes into liquidation or (b) if a bank is amalgamated/ merged with
another bank.
There are two methods of protecting depositors' interest when an insured bank fails:
(i) by transferring business of the failed bank to another sound bank23 (in case of merger or
amalgamation) and (ii) where the DICGC pays insurance proceeds to depositors (insurance
pay-out method).
The DICGC insures all deposits such as savings, fixed, current, recurring, etc. except the
following types of deposits:
22
Primary agricultural credit societies (PACS) are village-level cooperatives that disburse short-term credit. There
are over 95000 such societies in the country.
23
In 2004, Global Trust Bank was merged into Oriental Bank of Commerce, after significant losses from NPAs, and a
three-month Govt. imposed moratorium.
33
Inter-bank deposits;
Deposits of the State Land Development Banks with the State co-operative bank;
Any amount, which has been specifically exempted by the corporation with the previous
approval of RBI.
Each depositor in a bank is insured up to a maximum of Rs100,000 for both principal and
interest amount held by him in the same capacity and same right. For example, if an individual
had a deposit with principal amount of Rs.90,000 plus accrued interest of Rs.7,000, the total
amount insured by the DICGC would be Rs.97,000. If, however, the principal amount were Rs.
99,000 and accrued interest of Rs 6,000, the total amount insured by the DICGC would be Rs
1 lakh.
The deposits kept in different branches of a bank are aggregated for the purpose of insurance
cover and a maximum amount up to Rs 1 lakh is paid. Also, all funds held in the same type of
ownership at the same bank are added together before deposit insurance is determined. If the
funds are in different types of ownership (say as individual, partner of firm, director of company,
etc.) or are deposited into separate banks they would then be separately insured.
Also, note that where a depositor is the sole proprietor and holds deposits in the name of the
proprietary concern as well as in his individual capacity, the two deposits are to be aggregated
and the insurance cover is available up to rupees one lakh maximum.
Deposit insurance premium is borne entirely by the insured bank. Banks are required to pay
the insurance premium for the eligible amount to the DICGC on a semi-annual basis. The cost
of the insurance premium cannot be passed on to the customer.
The premium rates charged by DICGC were raised to Re 0.10 per deposit of Rs.100 with
effect from April 1, 2005. While the premiums received by DICGC during the years 2006-
07, 2007-08 and 2008-09 were Rs.2321 crores, Rs.2844 crores and Rs.3453 crores
respectively, the net claims paid by DICGC during these three years were Rs.323 crores,
Rs.180 crores and Rs.909 crores respectively.
34
Withdrawal of insurance cover
The deposit insurance scheme is compulsory and no bank can withdraw from it. The DICGC, on
the other hand, can withdraw the deposit insurance cover for a bank if it fails to pay the
premium for three consecutive half year periods. In the event of the DICGC withdrawing its
cover from any bank for default in the payment of premium, the public will be notified through
the newspapers.
35
CHAPTER 4: Basics of Bank Lending
Banks extend credit to different categories of borrowers for a wide variety of purposes. For
many borrowers, bank credit is the easiest to access at reasonable interest rates. Bank credit
is provided to households, retail traders, small and medium enterprises (SMEs), corporates,
the Government undertakings etc. in the economy.
Retail banking loans are accessed by consumers of goods and services for financing the purchase
of consumer durables, housing or even for day-to-day consumption. In contrast, the need for
capital investment, and day-to-day operations of private corporates and the Government
undertakings are met through wholesale lending.
Loans for capital expenditure are usually extended with medium and long-term maturities,
while day-to-day finance requirements are provided through short-term credit (working capital
loans). Meeting the financing needs of the agriculture sector is also an important role that
Indian banks play.
To lend, banks depend largely on deposits from the public. Banks act as custodian of public
deposits. Since the depositors require safety and security of their deposits, want to withdraw
deposits whenever they need and also adequate return, bank lending must necessarily be
based on principles that reflect these concerns of the depositors. These principles include:
safety, liquidity, profitability, and risk diversion.
Safety
Banks need to ensure that advances are safe and money lent out by them will come back.
Since the repayment of loans depends on the borrowers' capacity to pay, the banker must be
satisfied before lending that the business for which money is sought is a sound one. In addition,
bankers many times insist on security against the loan, which they fall back on if things go
wrong for the business. The security must be adequate, readily marketable and free of
encumbrances.
Liquidity
To maintain liquidity, banks have to ensure that money lent out by them is not locked up for
long time by designing the loan maturity period appropriately. Further, money must come
back as per the repayment schedule. If loans become excessively illiquid, it may not be
possible for bankers to meet their obligations vis--vis depositors.
36
Profitability
To remain viable, a bank must earn adequate profit on its investment. This calls for adequate
margin between deposit rates and lending rates. In this respect, appropriate fixing of interest
rates on both advances and deposits is critical. Unless interest rates are competitively fixed
and margins are adequate, banks may lose customers to their competitors and become
unprofitable.
Risk diversification
To mitigate risk, banks should lend to a diversified customer base. Diversification should be in
terms of geographic location, nature of business etc. If, for example, all the borrowers of a
bank are concentrated in one region and that region gets affected by a natural disaster, the
bank's profitability can be seriously affected.
Based on the general principles of lending stated above, the Credit Policy Committee (CPC) of
individual banks prepares the basic credit policy of the Bank, which has to be approved by the
Bank's Board of Directors. The loan policy outlines lending guidelines and establishes operating
procedures in all aspects of credit management including standards for presentation of credit
proposals, financial covenants, rating standards and benchmarks, delegation of credit approving
powers, prudential limits on large credit exposures, asset concentrations, portfolio management,
loan review mechanism, risk monitoring and evaluation, pricing of loans, provisioning for bad
debts, regulatory/ legal compliance etc. The lending guidelines reflect the specific bank's lending
strategy (both at the macro level and individual borrower level) and have to be in conformity
with RBI guidelines. The loan policy typically lays down lending guidelines in the following
areas:
Hurdle ratings
Loan pricing
Collateral security
A bank can lend out only a certain proportion of its deposits, since some part of deposits have
to be statutorily maintained as Cash Reserve Ratio (CRR) deposits, and an additional part has
to be used for making investment in prescribed securities (Statutory Liquidity Ratio or SLR
37
requirement).24 It may be noted that these are minimum requirements. Banks have the option
of having more cash reserves than CRR requirement and invest more in SLR securities than
they are required to. Further, banks also have the option to invest in non-SLR securities.
Therefore, the CPC has to lay down the quantum of credit that can be granted by the bank as
a percentage of deposits available. Currently, the average CD ratio of the entire banking
industry is around 70 percent, though it differs across banks. It is rarely observed that banks
lend out of their borrowings.
The CPC aims at a targeted portfolio mix keeping in view both risk and return. Toward this end,
it lays down guidelines on choosing the preferred areas of lending (such as sunrise sectors and
profitable sectors) as well as the sectors to avoid.25 Banks typically monitor all major sectors
of the economy. They target a portfolio mix in the light of forecasts for growth and profitability
for each sector. If a bank perceives economic weakness in a sector, it would restrict new
exposures to that segment and similarly, growing and profitable sectors of the economy prompt
banks to increase new exposures to those sectors. This entails active portfolio management.
Further, the bank also has to decide which sectors to avoid. For example, the CPC of a bank
may be of the view that the bank is already overextended in a particular industry and no more
loans should be provided in that sector. It may also like to avoid certain kinds of loans keeping
in mind general credit discipline, say loans for speculative purposes, unsecured loans, etc.
Hurdle ratings
There are a number of diverse risk factors associated with borrowers. Banks should have a
comprehensive risk rating system that serves as a single point indicator of diverse risk factors
of a borrower. This helps taking credit decisions in a consistent manner. To facilitate this, a
substantial degree of standardisation is required in ratings across borrowers. The risk rating
system should be so designed as to reveal the overall risk of lending. For new borrowers, a
bank usually lays down guidelines regarding minimum rating to be achieved by the borrower
to become eligible for the loan. This is also known as the 'hurdle rating' criterion to be achieved
by a new borrower.
24
Each bank has to statutorily set aside a certain minimum fraction of it net demand and time liabilities in prescribed
assets to fulfill these requirements. CRR and SLR have been discussed in chapter 1 and Chapter 5 respectively.
25
For example, in the last decade, a number of banks identified retail finance as an area with potential for strong
growth and have therefore sought to increase their financing in retail space. One advantage of financing a large
number of small loans is that risk concentration is reduced. However, during an economic downturn, the retail
portfolio may also experience significantly high credit defaults.
38
Pricing of loans
Risk-return trade-off is a fundamental aspect of risk management. Borrowers with weak financial
position and, hence, placed in higher risk category are provided credit facilities at a higher
price (that is, at higher interest). The higher the credit risk of a borrower the higher would be
his cost of borrowing. To price credit risks, banks devise appropriate systems, which usually
allow flexibility for revising the price (risk premium) due to changes in rating. In other words,
if the risk rating of a borrower deteriorates, his cost of borrowing should rise and vice versa.
At the macro level, loan pricing for a bank is dependent upon a number of its cost factors such
as cost of raising resources, cost of administration and overheads, cost of reserve assets like
CRR and SLR, cost of maintaining capital, percentage of bad debt, etc. Loan pricing is also
dependent upon competition.
Collateral security
As part of a prudent lending policy, banks usually advance loans against some security. The
loan policy provides guidelines for this. In the case of term loans and working capital assets,
banks take as 'primary security' the property or goods against which loans are granted.26 In
addition to this, banks often ask for additional security or 'collateral security' in the form of
both physical and financial assets to further bind the borrower. This reduces the risk for the
bank. Sometimes, loans are extended as 'clean loans' for which only personal guarantee of the
borrower is taken.
The credit policy of a bank should be conformant with RBI guidelines; some of the important
guidelines of the RBI relating to bank credit are discussed below.
The RBI lays down guidelines regarding minimum advances to be made for priority sector
advances, export credit finance, etc.27 These guidelines need to be kept in mind while formulating
credit policies for the Bank.
Capital adequacy
26
For example, in case of a home loan, the house for which the loan is taken serves as the 'primary security'.
27
Priority sector advances and export credit have been discussed later in this chapter.
39
backed up by. This is so, because bank capital provides a cushion against unexpected losses of
banks and riskier assets would require larger amounts of capital to act as cushion.
The Basel Committee for Bank Supervision (BCBS) has prescribed a set of norms for the
capital requirement for the banks for all countries to follow. These norms ensure that capital
should be adequate to absorb unexpected losses.28 In addition, all countries, including India,
establish their own guidelines for risk based capital framework known as Capital Adequacy
Norms. These norms have to be at least as stringent as the norms set by the Basel committee.
A key norm of the Basel committee is the Capital Adequacy Ratio (CAR), also known as Capital
Risk Weighted Assets Ratio, is a simple measure of the soundness of a bank. The ratio is the
capital with the bank as a percentage of its risk-weighted assets. Given the level of capital
available with an individual bank, this ratio determines the maximum extent to which the bank
can lend.
The Basel committee specifies a CAR of at least 8% for banks. This means that the capital
funds of a bank must be at least 8 percent of the bank's risk weighted assets. In India, the
RBI has specified a minimum of 9%, which is more stringent than the international norm.
In fact, the actual ratio of all scheduled commercial banks (SCBs) in India stood at 13.2% in
March 2009.
The RBI also provides guidelines about how much risk weights banks should assign to different
classes of assets (such as loans). The riskier the asset class, the higher would be the risk
weight. Thus, the real estate assets, for example, are given very high risk weights.
This regulatory requirement that each individual bank has to maintain a minimum level of
capital, which is commensurate with the risk profile of the bank's assets, plays a critical role in
the safety and soundness of individual banks and the banking system.
28
A Bank typically faces two types of losses in respect of any borrower or borrower class - expected and unexpected
losses. Expected losses should be budgeted for, and provisions should be made to offset their adverse effects on
the Bank's balance sheet. However, to cushion against unexpected losses, which are unpredictable, banks have to
hold adequate amount of capital.
40
exposures to NBFCs and to related entities are also in place. Table 4.1 gives a summary of the
RBI's guidelines on exposure norms for commercial banks in India.
Exposure to Limit
41
Group Borrowers: A bank's exposure to a group of companies under the same
management control must not exceed 40% of the Bank's capital funds unless the
exposure is in respect of an infrastructure project. In that case, the exposure to a
group of companies under the same management control may be up to 50% of the
Bank's capital funds.29
In addition to ensuring compliance with the above guidelines laid down by RBI, a Bank may fix
its own credit exposure limits for mitigating credit risk. The bank may, for example, set upper
caps on exposures to sensitive sectors like commodity sector, real estate sector and capital
markets. Banks also may lay down guidelines regarding exposure limits to unsecured loans.
Lending Rates
Banks are free to determine their own lending rates on all kinds of advances except a few such
as export finance; interest rates on these exceptional categories of advances are regulated by
the RBI.
It may be noted that the Section 21A of the BR Act provides that the rate of interest charged
by a bank shall not be reopened by any court on the ground that the rate of interest charged
is excessive.
The concept of benchmark prime lending rate (BPLR) was however introduced in November
2003 for pricing of loans by commercial banks with the objective of enhancing transparency in
the pricing of their loan products. Each bank must declare its benchmark prime lending rate
(BPLR) as approved by its Board of Directors. A bank's BPLR is the interest rate to be charged
to its best clients; that is, clients with the lowest credit risk. Each bank is also required to
indicate the maximum spread over the BPLR for various credit exposures.
However, BPLR lost its relevance over time as a meaningful reference rate, as the bulk of loans
were advanced below BPLR. Further, this also impedes the smooth transmission of monetary
signals by the RBI. The RBI therefore set up a Working Group on Benchmark Prime Lending
Rate (BPLR) in June 2009 to go into the issues relating to the concept of BPLR and suggest
measures to make credit pricing more transparent.
29
Banks may, in exceptional circumstances, with the approval of their boards, enhance the exposure by additional
5% for both individual, and group borrowers
42
Following the recommendations of the Group, the Reserve Bank has issued guidelines in February
2010. According to these guidelines, the 'Base Rate system' will replace the BPLR system with
effect from July 01, 2010.All categories of loans should henceforth be priced only with reference
to the Base Rate. Each bank will decide its own Base Rate. The actual lending rates charged to
borrowers would be the Base Rate plus borrower-specific charges, which will include product-
specific operating costs, credit risk premium and tenor premium.
Since transparency in the pricing of loans is a key objective, banks are required to exhibit the
information on their Base Rate at all branches and also on their websites. Changes in the Base
Rate should also be conveyed to the general public from time to time through appropriate
channels. Apart from transparency, banks should ensure that interest rates charged to customers
in the above arrangement are non-discriminatory in nature.
RBI has been encouraging banks to introduce a fair practices code for bank loans. Loan
application forms in respect of all categories of loans irrespective of the amount of loan sought
by the borrower should be comprehensive. It should include information about the fees/ charges,
if any, payable for processing the loan, the amount of such fees refundable in the case of non-
acceptance of application, prepayment options and any other matter which affects the interest
of the borrower, so that a meaningful comparison with the fees charged by other banks can be
made and informed decision can be taken by the borrower. Further, the banks must inform
'all-in-cost' to the customer to enable him to compare the rates charged with other sources
of finance.
The provisions of the Banking Regulation Act, 1949 (BR Act) govern the making of loans by
banks in India. RBI issues directions covering the loan activities of banks. Some of the major
guidelines of RBI, which are now in effect, are as follows:
Advances against bank's own shares: a bank cannot grant any loans and advances
against the security of its own shares.
Advances to bank's Directors: The BR Act lays down the restrictions on loans and
advances to the directors and the firms in which they hold substantial interest.
43
4.2 Basics of Loan Appraisal, Credit decision-making and Review
The Bank's Board of Directors also has to approve the delegation structure of the various
credit approval authorities. Banks establish multi-tier credit approval authorities for corporate
banking activities, small enterprises, retail credit, agricultural credit, etc.
Concurrently, each bank should set up a Credit Risk Management Department (CMRD), being
independent of the CPC. The CRMD should enforce and monitor compliance of the risk parameters
and prudential limits set up by the CPC.
Credit approving authority: multi-tier credit approving system with a proper scheme
of delegation of powers.
In some banks, high valued credit proposals are cleared through a Credit Committee
approach consisting of, say 3/ 4 officers. The Credit Committee should invariably have
a representative from the CRMD, who has no volume or profit targets.
When a loan proposal comes to the bank, the banker has to decide how much funds does the
proposal really require for it to be a viable project and what are the credentials of those who
are seeking the project. In checking the credentials of the potential borrowers, Credit Information
Bureaus play an important role (see Box)
The Parliament of India has enacted the Credit Information Companies (Regulation) Act,
2005, pursuant to which every credit institution, including a bank, has to become a member
of a credit information bureau and furnish to it such credit information as may be required of
the credit institution about persons who enjoy a credit relationship with it. Credit information
bureaus are thus repositories of information, which contains the credit history of commercial
and individual borrowers. They provide this information to their Members in the form of
credit information reports.
To get a complete picture of the payment history of a credit applicant, credit grantors must
be able to gain access to the applicant's complete credit record that may be spread over
different institutions. Credit information bureaus collect commercial and consumer credit-
related data and collate such data to create credit reports, which they distribute to their
Members. A Credit Information Report (CIR) is a factual record of a borrower's credit payment
history compiled from information received from different credit grantors. Its purpose is to
help credit grantors make informed lending decisions - quickly and objectively. As of today,
bureaus provide history of credit card holders and SMEs.
44
4.2.3 Monitoring and Review of Loan Portfolio
It is not only important for banks to follow due processes at the time of sanctioning and
disbursing loans, it is equally important to monitor the loan portfolio on a continuous basis.
Banks need to constantly keep a check on the overall quality of the portfolio. They have to
ensure that the borrower utilizes the funds for the purpose for which it is sanctioned and
complies with the terms and conditions of sanction. Further, they monitor individual borrowal
accounts and check to see whether borrowers in different industrial sectors are facing difficulty
in making loan repayment. Information technology has become an important tool for efficient
handling of the above functions including decision support systems and data bases. Such a
surveillance and monitoring approach helps to mitigate credit risk of the portfolio.
Banks have set up Loan Review Departments or Credit Audit Departments in order to ensure
compliance with extant sanction and post-sanction processes and procedures laid down by the
Bank from time to time. This is especially applicable for the larger advances. The Loan Review
Department helps a bank to improve the quality of the credit portfolio by detecting early
warning signals, suggesting remedial measures and providing the top management with
information on credit administration, including the credit sanction process, risk evaluation and
post-sanction follow up.
Advances can be broadly classified into: fund-based lending and non-fund based lending.
Fund based lending: This is a direct form of lending in which a loan with an actual cash
outflow is given to the borrower by the Bank. In most cases, such a loan is backed by primary
and/or collateral security. The loan can be to provide for financing capital goods and/or working
capital requirements.
Non-fund based lending: In this type of facility, the Bank makes no funds outlay. However,
such arrangements may be converted to fund-based advances if the client fails to fulfill the
terms of his contract with the counterparty. Such facilities are known as contingent liabilities
of the bank. Facilities such as 'letters of credit' and 'guarantees' fall under the category of non-
fund based credit.
Let us explain with an example how guarantees work. A company takes a term loan from Bank
A and obtains a guarantee from Bank B for its loan from Bank A, for which he pays a fee. By
issuing a bank guarantee, the guarantor bank (Bank B) undertakes to repay Bank A, if the
company fails to meet its primary responsibility of repaying Bank A.
In this chapter, we will discuss only some important types of fund-based lending.
45
4.3.1 Working Capital Finance
Working capital finance is utilized for operating purposes, resulting in creation of current assets
(such as inventories and receivables). This is in contrast to term loans which are utilized for
establishing or expanding a manufacturing unit by the acquisition of fixed assets.
Banks carry out a detailed analysis of borrowers' working capital requirements. Credit limits
are established in accordance with the process approved by the board of directors. The limits
on Working capital facilities are primarily secured by inventories and receivables (chargeable
current assets).
Working capital finance consists mainly of cash credit facilities, short term loan and bill
discounting. Under the cash credit facility, a line of credit is provided up to a pre-established
amount based on the borrower's projected level of sales inventories, receivables and cash
deficits. Up to this pre-established amount, disbursements are made based on the actual level
of inventories and receivables. Here the borrower is expected to buy inventory on payments
and, thereafter, seek reimbursement from the Bank. In reality, this may not happen. The
facility is generally given for a period of up to 12 months and is extended after a review of the
credit limit. For clients facing difficulties, the review may be made after a shorter period.
One problem faced by banks while extending cash credit facilities, is that customers can draw
up to a maximum level or the approved credit limit, but may decide not to. Because of this,
liquidity management becomes difficult for a bank in the case of cash credit facility. RBI has
been trying to mitigate this problem by encouraging the Indian corporate sector to avail of
working capital finance in two ways: a short-term loan component and a cash credit component.
The loan component would be fully drawn, while the cash credit component would vary depending
upon the borrower's requirements.
According to RBI guidelines, in the case of borrowers enjoying working capital credit limits of
Rs. 10 crores and above from the banking system, the loan component should normally be
80% and cash credit component 20 %. Banks, however, have the freedom to change the
composition of working capital finance by increasing the cash credit component beyond 20%
or reducing it below 20 %, as the case may be, if they so desire.
Bill discounting facility involves the financing of short-term trade receivables through negotiable
instruments. These negotiable instruments can then be discounted with other banks, if required,
providing financing banks with liquidity.
Project finance business consists mainly of extending medium-term and long-term rupee and
foreign currency loans to the manufacturing and infrastructure sectors. Banks also provide
financing by way of investment in marketable instruments such as fixed rate and floating rate
46
debentures. Lending banks usually insist on having a first charge on the fixed assets of the
borrower.
During the recent years, the larger banks are increasingly becoming involved in financing
large projects, including infrastructure projects. Given the large amounts of financing involved,
banks need to have a strong framework for project appraisal. The adopted framework will
need to emphasize proper identification of projects, optimal allocation and mitigation of risks.
The project finance approval process entails a detailed evaluation of technical, commercial,
financial and management factors and the project sponsor's financial strength and experience.
As part of the appraisal process, a risk matrix is generated, which identifies each of the project
risks, mitigating factors and risk allocation.
Project finance extended by banks is generally fully secured and has full recourse to the
borrower company. In most project finance cases, banks have a first lien on all the fixed assets
and a second lien on all the current assets of the borrower company. In addition, guarantees
may be taken from sponsors/ promoters of the company. Should the borrower company fail to
repay on time, the lending bank can have full recourse to the sponsors/ promoters of the
company. (Full recourse means that the lender can claim the entire unpaid amount from the
sponsors / promoters of the company.) However, while financing very large projects, only
partial recourse to the sponsors/ promoters may be available to the lending banks.
A substantial quantum of loans is granted by banks to small and medium enterprises (SMEs).
While granting credit facilities to smaller units, banks often use a cluster-based approach,
which encourages financing of small enterprises that have a homogeneous profile such as
leather manufacturing units, chemical units, or even export oriented units. For assessing the
credit risk of individual units, banks use the credit scoring models.
As per RBI guidelines, banks should use simplified credit appraisal methods for assessment of
bank finance for the smaller units. Further, banks have also been advised that they should not
insist on collateral security for loans up to Rs.10 lakh for the micro enterprises.
Given the importance of SME sector, the RBI has initiated several measures to increase the
flow of credit to this segment. As part of this effort, the public sector banks (PSBs) have
been operationalizing specialized SME bank branches for ensuring uninterrupted credit flow
to this sector. As at end-March 2009, PSBs have operationalised as many as 869 specialized
SME bank branches.
47
Small Industries Development Bank of India (SIDBI) also facilitates the flow of credit at
reasonable interest rates to the SME sector. This is done by incentivising banks and State
Finance Corporations to lend to SMEs by refinancing a specified percentage of incremental
lending to SMEs, besides providing direct finance along with banks.
The rural and agricultural loan portfolio of banks comprises loans to farmers, small and medium
enterprises in rural areas, dealers and vendors linked to these entities and even corporates.
For farmers, banks extend term loans for equipments used in farming, including tractors,
pump sets, etc. Banks also extend crop loan facility to farmers. In agricultural financing, banks
prefer an 'area based' approach; for example, by financing farmers in an adopted village. The
regional rural banks (RRBs) have a special place in ensuring adequate credit flow to agriculture
and the rural sector.
The concept of 'Lead Bank Scheme (LBS)' was first mooted by the Gadgil Study Group, which
submitted its report in October 1969. Pursuant to the recommendations of the Gadgil Study
Group and those of the Nariman Committee, which suggested the adoption of 'area approach'
in evolving credit plans and programmes for development of banking and the credit structure,
the LBS was introduced by the RBI in December, 1969. The scheme envisages allotment of
districts to individual banks to enable them to assume leadership in bringing about banking
developments in their respective districts. More recently, a High Level Committee was constituted
by the RBI in November 2007, to review the LBS and improve its effectiveness, with a focus on
financial inclusion and recent developments in the banking sector. The Committee has
recommended several steps to further improve the working of LBS. The importance of the role
of State Governments for supporting banks in increasing banking business in rural areas has
been emphasized by the Committee.
The RBI requires banks to deploy a certain minimum amount of their credit in certain identified
sectors of the economy. This is called directed lending. Such directed lending comprises priority
sector lending and export credit.
The objective of priority sector lending program is to ensure that adequate credit flows into
some of the vulnerable sectors of the economy, which may not be attractive for the banks from
the point of view of profitability. These sectors include agriculture, small scale enterprises,
retail trade, etc. Small housing loans, loans to individuals for pursuing education, loans to
weaker sections of the society etc also qualify as priority sector loans.
To ensure banks channelize a part of their credit to these sectors, the RBI has set guidelines
48
defining targets for lending to priority sector as whole and in certain cases, sub-targets for
lending to individual priority sectors (See Table 4.2).
Export credit Export credit is not a part of 12 per cent of ANBC or CEOBSE,
priority sector for domestic whichever is higher
commercial banks
Source: Master Circular on Lending to Priority Sector dated July 1, 2009, Reserve Bank of
India
Note: ANBC: Adjusted Net Bank Credit
CEOBSE: Credit Equivalent of Off-Balance Sheet Exposure
The RBI guidelines require banks to lend at least 40% of Adjusted Net Bank Credit (ANBC) or
credit equivalent amount of Off-Balance Sheet Exposure (CEOBSE), whichever is higher. In
case of foreign banks, the target for priority sector advances is 32% of ANBC or CEOBSE,
whichever is higher.
In addition to these limits for overall priority sector lending, the RBI sets sub-limits for certain
sub-sectors within the priority sector such as agriculture. Banks are required to comply with
the priority sector lending requirements at the end of each financial year. A bank having
shortfall in lending to priority sector lending target or sub-target shall be required to make
contribution to the Rural Infrastructure Development Fund (RIDF) established with NABARD or
funds with other financial institutions as specified by the RBI.
49
Box 4.3: Differential Rate of Interest (DRI) Scheme
Government of India had formulated in March, 1972 a scheme for extending financial
assistance at concessional rate of interest @ 4% to selected low income groups for productive
endeavors. The scheme known as Differential Rate of Interest Scheme (DRI) is now being
implemented by all Scheduled Commercial Banks. The maximum family incomes that qualify
a borrower for the DRI scheme is revised periodically. Currently, the RBI has advised the
banks that borrowers with annual family income of Rs.18,000 in rural areas and Rs.24,000
in urban and semi-urban areas would be eligible to avail of the facility as against the earlier
annual income criteria of Rs.6,400 in rural areas and Rs.7,200 in urban areas. The target for
lending under the DRI scheme in a year is maintained at one per cent of the total advances
outstanding as at the end of the previous year.
B. Export Credit
As part of directed lending, RBI requires banks to make loans to exporters at concessional
rates of interest. Export credit is provided for pre-shipment and post-shipment requirements
of exporter borrowers in rupees and foreign currencies. At the end of any fiscal year, 12.0% of
a bank's credit is required to be in the form of export credit. This requirement is in addition to
the priority sector lending requirement but credits extended to exporters that are small scale
industries or small businesses may also meet part of the priority sector lending requirement.
Banks, today, offer a range of retail asset products, including home loans, automobile loans,
personal loans (for marriage, medical expenses etc), credit cards, consumer loans (such as TV
sets, personal computers etc) and, loans against time deposits and loans against shares.
Banks also may fund dealers who sell automobiles, two wheelers, consumer durables and
commercial vehicles. The share of retail credit in total loans and advances was 21.3% at end-
March 2009.
Customers for retail loans are typically middle and high-income, salaried or self-employed
individuals, and, in some cases, proprietorship and partnership firms. Except for personal
loans and credit through credit cards, banks stipulate that (a) a certain percentage of the cost
of the asset (such as a home or a TV set) sought to be financed by the loan, to be borne by the
borrower and (b) that the loans are secured by the asset financed.
Many banks have implemented a credit-scoring program, which is an automated credit approval
system that assigns a credit score to each applicant based on certain attributes like income,
educational background and age. The credit score then forms the basis of loan evaluation.
External agencies such as field investigation agencies and credit processing agencies may be
50
used to facilitate a comprehensive due diligence process including visits to offices and homes
in the case of loans to individual borrowers. Before disbursements are made, the credit officer
checks a centralized delinquent database and reviews the borrower's profile. In making credit
decisions, banks draw upon reports from agencies such as the Credit Information Bureau
(India) Limited (CIBIL).
Some private sector banks use direct marketing associates as well as their own branch network
and employees for marketing retail credit products. However, credit approval authority lies
only with the bank's credit officers.
Two important categories of retail loans--home finance and personal loans--are discussed
below.
Home Finance: Banks extend home finance loans, either directly or through home finance
subsidiaries. Such long term housing loans are provided to individuals and corporations and
also given as construction finance to builders. The loans are secured by a mortgage of the
property financed. These loans are extended for maturities generally ranging from five to
fifteen years and a large proportion of these loans are at floating rates of interest. This reduces
the interest rate risk that banks assume, since a bank's sources of finance are generally of
shorter maturity. However, fixed rate loans may also be provided; usually with banks keeping
a higher margin over benchmark rates in order to compensate for higher interest rate risk.
Equated monthly installments are fixed for repayment of loans depending upon the income
and age of the borrower(s).
Personal Loans: These are often unsecured loans provided to customers who use these
funds for various purposes such as higher education, medical expenses, social events and
holidays. Sometimes collateral security in the form of physical and financial assets may be
available for securing the personal loan. Portfolio of personal loans also includes micro-banking
loans, which are relatively small value loans extended to lower income customers in urban and
rural areas.
Indian corporates raise foreign currency loans from banks based in India as well as abroad as
per guidelines issued by RBI/ Government of India. Banks raise funds abroad for on-lending to
Indian corporates. Further, banks based in India have an access to deposits placed by Non
Resident Indians (NRIs) in the form of FCNR (B) deposits, which can be used by banks in India
for on-lending to Indian customers.
51
4.4 Management of Non Performing Assets
An asset of a bank (such as a loan given by the bank) turns into a non-performing asset (NPA)
when it ceases to generate regular income such as interest etc for the bank. In other words,
when a bank which lends a loan does not get back its principal and interest on time, the loan
is said to have turned into an NPA. Definition of NPAs is given in 4.4.1. While NPAs are a
natural fall-out of undertaking banking business and hence cannot be completely avoided,
high levels of NPAs can severely erode the bank's profits, its capital and ultimately its ability to
lend further funds to potential borrowers. Similarly, at the macro level, a high level of non-
performing assets means choking off credit to potential borrowers, thus lowering capital
formation and economic activity. So the challenge is to keep the growth of NPAs under control.
Clearly, it is important to have a robust appraisal of loans, which can reduce the chances of
loan turning into an NPA. Also, once a loan starts facing difficulties, it is important for the bank
to take remedial action.
The gross non-performing assets of the banking segment were Rs. 68, 973 crores at the end
of March 2009, and the level of net NPAs (after provisioning) was Rs.31, 424 crores. Although
they appear to be very large amounts in absolute terms, they are actually quite small in
comparison to total loans by banks. The ratio of gross non-performing loans to gross total
loans has fallen sharply over the last decade and is at 2.3 per cent as at end-March 2009.
This ratio, which is an indicator of soundness of banks, is comparable with most of the
developed countries such as France, Germany and Japan. The low level of gross NPAs as a
percent of gross loans in India is a positive indicator of the Indian banking system.
Source: Report on Trend and Progress of Banking in India 2008-09, RBI and Report on
Currency and Finance 2006-08.
Banks have to classify their assets as performing and non-performing in accordance with RBI's
guidelines. Under these guidelines, an asset is classified as non-performing if any amount of
interest or principal instalments remains overdue for more than 90 days, in respect of term
loans. In respect of overdraft or cash credit, an asset is classified as non-performing if the
account remains out of order for a period of 90 days and in respect of bills purchased and
discounted account, if the bill remains overdue for a period of more than 90 days.
All assets do not perform uniformly. In some cases, assets perform very well and the recovery
of principal and interest happen on time, while in other cases, there may be delays in recovery
or no recovery at all because of one reason or the other. Similarly, an asset may exhibit good
quality performance at one point of time and poor performance at some other point of time.
According to the RBI guidelines, banks must classify their assets on an on-going basis into the
following four categories:
52
Standard assets: Standard assets service their interest and principal instalments on time;
although they occasionally default up to a period of 90 days. Standard assets are also called
performing assets. They yield regular interest to the banks and return the due principal on
time and thereby help the banks earn profit and recycle the repaid part of the loans for further
lending. The other three categories (sub-standard assets, doubtful assets and loss assets) are
NPAs and are discussed below.
Sub-standard assets: Sub-standard assets are those assets which have remained NPAs
(that is, if any amount of interest or principal instalments remains overdue for more than 90
days) for a period up to 12 months.
Doubtful assets: An asset becomes doubtful if it remains a sub-standard asset for a period of
12 months and recovery of bank dues is of doubtful.
Loss assets: Loss assets comprise assets where a loss has been identified by the bank or the
RBI. These are generally considered uncollectible. Their realizable value is so low that their
continuance as bankable assets is not warranted..
RBI has separate guidelines for restructured loans. instalment
or interest amount are eligible to be upgraded to the standard category only after a specified
period.
To create an institutional mechanism for the restructuring of corporate debt, RBI has devised
a Corporate Debt Restructuring (CDR) system. The objective of this framework is to ensure a
timely and transparent mechanism for the restructuring of corporate debts of viable entities
facing problems.
If rehabilitation of debt through restructuring is not possible, banks themselves make efforts
to recover. For example, banks set up special asset recovery branches which concentrate on
53
recovery of bad debts. Private and foreign banks often have a collections unit structured along
various product lines and geographical locations, to manage bad loans. Very often, banks
engage external recovery agents to collect past due debt, who make phone calls to the customers
or make visits to them. For making debt recovery, banks lay down their policy and procedure
in conformity with RBI directives on recovery of debt.
The past due debt collection policy of banks generally emphasizes on the following at the time
of recovery:
Respect to customers
In difficult cases, banks have the option of taking recourse to filing cases in courts, Lok Adalats,
Debt Recovery Tribunals (DRTs), One Time Settlement (OTS) schemes, etc. DRTs have been
established under the Recovery of Debts due to Banks and Financial Institutions Act, 1993 for
expeditious adjudication and recovery of debts that are owed to banks and financial institutions.
Accounts with loan amount of Rs. 10 lakhs and above are eligible for being referred to DRTs.
OTS schemes and Lok Adalats are especially useful to NPAs in smaller loans in different segments,
such as small and marginal farmers, small loan borrowers and SME entrepreneurs.
If a bank is unable to recover the amounts due within a reasonable period, the bank may write
off the loan. However, even in these cases, efforts should continue to make recoveries.
Banks utilize the Securitisation and Reconstruction of Financial Assets and Enforcement of
Security Interest Act, 2002 (SARFAESI) as an effective tool for NPA recovery. It is possible
where non-performing assets are backed by securities charged to the Bank by way of
hypothecation or mortgage or assignment. Upon loan default, banks can seize the securities
(except agricultural land) without intervention of the court.30
30
SARFAESI is effective only for secured loans where bank can enforce the underlying security eg hypothecation ,
pledge and mortgages. In such cases, court intervention is not necessary, unless the security is invalid or fraudu-
lent. However, if the asset in question is an unsecured asset, the bank would have to move the court to file civil
case against the defaulters.
54:
The SARFAESI Act also provides for the establishment of asset reconstruction companies.
55
CHAPTER 5: Bank Investments
In addition to loans and advances, which were discussed in Chapter 4, banks deploy a part of
their resources in the form of investment in securities/ financial instruments. The bulk of a
bank's assets are held either in the form of (a) loans and advances and (b) investments.
Investments form a significant portion of a bank's assets, next only to loans and advances,
and are an important source of overall income. Commercial banks' investments are of three
broad types: (a) Government securities, (b) other approved securities and (c) other securities.
These three are also categorised into SLR (Statutory Liquidity Ratio) investment and non-SLR
investments. SLR investments comprise Government and other approved securities, while
non-SLR investments consist of 'other securities' which comprise commercial papers, shares,
bonds and debentures issued by the corporate sector.
Under the SLR requirement, banks are required to invest a prescribed minimum of their net
demand and time liabilities (NDTL) in Government- and other approved securities under the
BR act, 1949. (Note that SLR is prescribed in terms of banks' liabilities and not assets). This
provision amounts to 'directed investment', as the law directs banks to invest a certain minimum
part of their NDTL in specific securities. While the SLR provision reduces a bank's flexibility to
determine its asset mix, it helps the Government finance its fiscal deficit.31
31
The Government finances its fiscal deficit (broadly, government expenditure minus government revenue) by bor-
rowing, in other words, through the issue of Government securities. Because of the legal provision mandating
56
It is the RBI that lays down guidelines regarding investments in SLR and non-SLR securities.
Bank investments are handled by banks through their respective Treasury Department. This
chapter discusses banks' investment policy and operational details and guidelines relating to
investments.
The Investment Policy outlines general instructions and safeguards necessary to ensure that
operations in securities are conducted in accordance with sound and acceptable business
practices. The parameters on which the policy is based are return (target return as determined
in individual cases), duration (target duration of the portfolio), liquidity consideration and risk.
Thus, while the Policy remains within the framework of the RBI guidelines with respect to bank
investment, it also takes into consideration certain bank-specific factors, viz., the bank's liquidity
condition and its ability to take credit risk, interest rate risk and market risk. The policy is
determined for SLR and non-SLR securities, separately.
The Investment Policy provides guidelines with respect to investment instruments, maturity
mix of investment portfolio, exposure ceilings, minimum rating of bonds/ debentures, trading
policy, accounting standards, valuation of securities and income recognition norms, audit review
and reporting and provisions for Non-Performing Investments (NPI). It also outlines functions
of front office/ back office/ mid office, delegation of financial powers as a part of expeditious
decision-making process in treasury operations, handling of asset liability management (ALM)
issues, etc.
Several banks follow the practice of a strategy paper. Based on the market environment
envisaged by Asset Liability Committee (ALCO) in the Asset Liability Management (ALM) Policy,
a Strategy Paper on investments and expected yield is usually prepared which is placed before
the CEO of the Bank. A review of the Strategy Paper may be done at, say half yearly basis and
put up to the CEO.
banks to invest a minimum fraction of their NDTL in government securities, banks are captive financiers of the
Government's fiscal deficit. Of course, banks are not the only subscribers of government securities.
57
5.2 Statutory Reserve Requirements
5.2.1 Maintenance of Statutory Liquidity Ratio (SLR)
Banks' investments in Central and State Government dated securities including treasury bills
are governed by the RBI guidelines regarding maintenance of minimum level of SLR securities
as well as their own approved policy.
As stated earlier, under the Banking Regulation Act, 1949, the RBI prescribes the minimum
SLR level for Scheduled Commercial Banks (SCBs) in India in specified assets as a percentage
of the bank's NDTL. The actual percentage (that is, the value of such assets of an SCB as a
percentage of its NDTL) must not be less than such stipulated percentage. The RBI may
change the stipulated percentage from time to time.
Over the years, this ratio (SLR ratio) has changed a lot, but has broadly moved on a downward
trajectory, from the of 38.5% of NDTL in the early 90's (September 1990) to 25% by October
1997, with the financial sector reforms giving banks greater flexibility to determine their
respective asset mix. The SLR was further reduced to 24 percent of NDTL in November 2008,
but has been raised back to 25 percent level since October 2009. Currently, it is at 25 percent
level.
Banks can and do invest more than the legally prescribed minimum in SLR, as can be seen
from Table 5.1.
2006 31.3 25
2007 27.9 25
2008 27.8 25
2009 28.1 24
The RBI has prescribed that all SCBs should maintain their SLR in the following instruments
which will be referred to as "statutory liquidity ratio (SLR) securities":
iii. Dated securities of the Government of India issued from time to time under the market
borrowing programme and the Market Stabilisation Scheme;
iv. State Development Loans (SDLs) of the State Governments issued from time to time
under their market borrowing programme; and
58
v. Any other instrument as may be notified by RBI.
The composition of investment by commercial banks is given in the following Table. It can be
seen that SLR investments, particularly government securities, form the bulk of total securities.
Non-SLR investments form a relatively small part of banks' total investment.
(Rupees crore)
Note: The figures in bracket show the investments as a percent to the total investment.
5.2.2 Penalties
If a banking company fails to maintain the required amount of SLR securities on any given day,
it shall be liable to pay to the RBI.32
32 The Bank Rate is determined by the RBI from time to time. It is the rate at which the RBI lends to the banks, and
should not be confused with the repo rate, which is the lending rate the RBI uses in the daily repo (repurchase)
markets.
59
5.3 Non-SLR Investments
If there is any proposal to invest or disinvest in non-SLR securities, the concerned officials
must refer these proposals to the Investment Committee of the bank. Upon vetting and clearance
by the Investment Committee, financial sanction should be obtained from the appropriate
authority in terms of the Scheme of Delegation of Financial Powers.
Strategic Investments
PSU Bonds
Corporate Investments
Mutual Funds
However, as per RBI guidelines, the investments (SLR as well as Non-SLR) will be disclosed in
the balance sheet of the Bank as per the six-category classification listed below:
a. Government securities,
c. Shares,
According to the RBI, banks desirous of investing in equity shares/ debentures should observe
the following guidelines:
ii. Formulate a transparent policy and procedure for investment in shares, etc., with the
approval of the Board; and
60
iii. The decision in regard to direct investment in shares, convertible bonds and debentures
should be taken by the Investment Committee set up by the bank's Board. The
Investment Committee should also be held accountable for the investments made by
the bank.
Further, with the approval of respective Boards,. Accordingly, the Boards of banks lay down policy and prudential limits on investments
in various categories, as stated earlier.
Investment proposals should be subjected to the same degree of credit risk analysis as any
loan proposal. Banks should have their own internal credit analysis and ratings even in respect
of issues rated by external agencies and should not entirely rely on the ratings of external
agencies. The appraisal should be more stringent in respect of investments in instruments
issued by non-borrower customers.
As a matter of prudence, banks should stipulate entry-level minimum ratings/ quality standards
and industry-wise, maturity-wise, duration-wise and issuer-wise limits to mitigate the adverse
impacts of concentration of investment and the risk of illiquidity.
Statutory prescriptions relating to the investment portfolio are to be complied with. The
investments have to be within the specific and general prudential limits fixed by RBI and in
conformity with the provisions of the BR Act and other applicable laws and guidelines that are
issued by the regulators like RBI, Securities Exchange Board of India (SEBI), etc.
For example, banks should not invest in non-SLR securities of original maturity of less than
one-year, other than Commercial Paper and Certificates of Deposits, which are covered under
RBI guidelines.
Further, according to RBI guidelines, a bank's investment in unlisted non-SLR securities should
not exceed 10 per cent of its total investment in non-SLR securities as on March 31, of the
previous year, and such investment should comply with the disclosure requirements as prescribed
by Securities Exchange Board of India (SEBI) for listed companies. Bank's investment in unlisted
non-SLR securities may exceed the limit of 10 per cent, by an additional 10 per cent, provided
the investment is on account of investment in securitised papers issued for infrastructure
projects, and bonds/ debentures issued by Securitisation Companies (SCs) and Reconstruction
Companies (RCs) set up under the SARFAESI Act and registered with RBI.
61
A bank may also decide to put in place additional quantitative ceilings on aggregate non-SLR
investments as a percentage of the bank's net worth (equity plus reserves). There are also
restrictions regarding exposure to a particular industry.
Rating requirements
Banks must not invest in unrated non-SLR securities. However, the banks may invest in unrated
bonds of companies engaged in infrastructure activities, within the ceiling of 10 per cent for
unlisted non-SLR securities as mentioned earlier. Furthermore, the debt securities shall carry
a credit rating of not less than investment grade from a credit rating agency registered with
the SEBI.
The aggregate exposure of a bank to the capital markets in all forms (both fund based and
non-fund based) should not exceed 40% of its net worth as on March 31 of the previous year.
Within this overall ceiling, the bank's direct investment in shares, convertible bonds/ debentures,
units of equity-oriented mutual funds and all exposures to venture capital funds (both registered
and unregistered) should not exceed 20 per cent of its net worth.
The above-mentioned ceilings are the maximum permissible and a bank's Board of Directors is
free to adopt a lower ceiling for the Bank, keeping in view its overall risk profile and corporate
strategy. Banks are required to adhere to the ceilings on an ongoing basis.
No bank can hold shares, as a pledgee, mortgagee or absolute owner in any company other
than a subsidiary, exceeding 30 per cent of the paid up share capital of that company or 30 per
cent of its own paid-up share capital and reserves, whichever is less.
HTM includes securities acquired with the intention of being held up to maturity; HFT includes
securities acquired with the intention of being traded to take advantage of the short-term
33
HTM must not be more than 25% of the portfolio.
62
price/ interest rate movements; and AFS refers to those securities not included in HTM and
HFT. Banks should decide the category of investment at the time of acquisition.
Profit or loss on the sale of investments in both HFT and AFS categories is taken in the income
statement. Shifting of investments from / to HTM may be done with the approval of the Board
of Directors once a year, normally at the beginning of the accounting year. Similarly, shifting of
investments from AFS to HFT may be done with the approval of the Board of Directors, the
ALCO or the Investment Committee. Shifting from HFT to AFS is generally not permitted..
HTM securities are not marked to market and are carried at acquisition cost or at an amortised
cost if acquired at a premium over the face value. (In the case of HTM securities, if the
acquisition cost is more than the face value, premium should be amortised or written off over
the period remaining to maturity.) AFS and HFT securities are valued at market or fair value as
at the balance sheet date.
Valuation of investments is to be done as per guidelines issued by RBI from time to time.
However, banks may adopt a more conservative approach as a measure of prudence.
The 'market value' for the purpose of periodic valuation of investments included in the AFS and
HFT would be the market price of the scrip as available from the trades/ quotes on the stock
exchanges, price list of RBI and prices declared by Primary Dealers Association of India (PDAI)
jointly with the Fixed Income Money Market and Derivatives Association of India (FIMMDA)
periodically. In respect of unquoted securities, RBI has laid down the detailed procedure to be
adopted. For example, banks should value the unquoted Central Government securities on the
basis of the prices/ yield to maturity (YTM) rates put out by the PDAI/ FIMMDA at periodic
intervals.
Investment transactions are undertaken as per the approved investment policy of the bank
and in accordance with the trading policy and Manual of Instructions. With a view to synergising
strengths, a bank usually operates an integrated treasury department under which both domestic
and forex treasuries are brought under a unified command structure. The following distinction
is important to note. The task of domestic treasury operations is predominantly to make
investments on their own account, while the task of forex treasury is to predominantly conduct
operations on behalf of clients. The discussions so far on SLR and non-SLR operations fall
under domestic treasury management.
63
5.4.2 Forex Treasury Management
undertakes cover operation for merchant transactions, trades in inter-bank forex market,
manages foreign currency funds like Foreign Currency Non Resident (FCNR) Accounts,
Exchange Earners Foreign Currency (EEFC) Accounts, etc. and maintains Nostro
34
Accounts .
While bulk of the forex treasury operations are on behalf of the clients, the banks also handle
proprietary trading, i.e., forex trading on the banks' own account.
One important safeguard that banks are required to take is to make a clear separation between
their transactions on their own account and those on behalf of their clients.
34
FCNR accounts are maintained by NRIs, while EEFC accounts are maintained by exporters; a Nostro account is an
account a bank holds with another bank in a foreign country, usually in the currency of that foreign country
64
CHAPTER 6: Other Activities of Commercial
Banks
As a continuation of their main deposit taking and lending activities, banks pursue certain
activities to offer a number of services to customers. They can be put into two broad categories:
(a) Other basic banking activities and (b) Para-banking activities. The former includes provision
of remittance facilities including issuance of drafts, mail transfers and telegraphic transfers,
issuance of travellers' cheques & gift cheques, locker facility etc. Banks also undertake various
para-banking activities including investment banking services, selling mutual funds, selling
insurance products, offering depository services, wealth management services, brokerage,
etc. While the services offered assist the banks to attract more depositors and borrowers, they
also manage to increase their income in the process. Banks earn fees by offering these services
to the customers, as opposed to interest income earned from the lending activities.
Banks undertake foreign exchange transactions for their customers. The foreign exchange
contracts arise out of spot (current) and forward foreign exchange transactions entered into
with corporate and non-corporate customers and counter-party banks for the purpose of hedging
and trading. Banks derive income from the spread or difference between buying and selling
rates of foreign exchange.
Leading banks provide customer specific products and services which cater to risk hedging
needs of corporates at domestic and international locations, arising out of currency and interest
rate fluctuations. These include products such as options and swaps, which are derived from
the foreign exchange market or the interest rate market. These are tailor made products
designed to meet specific risk hedging requirements of the customer.
In addition to the direct foreign exchange related income on buying and selling of foreign
exchange, income is generated in related services while undertaking the main foreign exchange
business. These services include the establishment of letters of credit, issuance of guarantees,
document collection services, etc.
Some banks, including leading public sector banks, private sector banks and local branches of
foreign banks earn significant income from foreign exchange related transactions.
Banks offer various types of services to government departments including direct and indirect
tax collections, remittance facilities, payments of salaries and pensions, etc. Banks also
65
undertake other related businesses like distribution of Government and RBI bonds and handling
public provident fund accounts. Government departments pay fees to banks for undertaking
this business. Banks such as State of Bank of India, with a wide network of branches, are able
to earn significant income by offering these services to government departments.
As stated in Chapter 1, in any economy, banks are at the core of the payment and settlement
systems, which constitute a very important part of the commercial banks' functions. The
payment and settlement systems, as a mechanism, facilitate transfer of value between a
payer and a beneficiary by which the payer discharges the payment obligations to the beneficiary.
The payment and settlement systems enable two-way flow of payments in exchange of goods
and services in the economy. This mechanism is used by individuals, banks, companies,
governments, etc. to make payments to one another.
The RBI has the power to regulate the payment system under the provisions of the Payment
and Settlement Systems (PSS) Act 2007, and the Payment and Settlement Systems Regulations
2008.35 The Board for Regulation and Supervision of Payment and Settlement Systems (BPSS)
is a sub-committee of the Central Board of RBI and is the highest policy making body on the
payment system. The Board is assisted by a technical committee called National Payments
Council (NPC) with eminent experts in the field as members.
There are two types of payments: paper based and electronic. Payments can be made in India
in paper based forms (in the forms of cash, cheque, demand drafts), and electronic forms
(giving electronic instructions to the banker who will make such a payment on behalf of his
customers; credit card; debit card).
The primary paper based payment instrument is the cheque. The process of cheque payment
starts when a payer gives his personal cheque to the beneficiary. To get the actual payment of
funds, the receiver of the cheque has to deposit the cheque in his bank account. If the beneficiary the means of a 'clearing house'.
35
Aside from the regulatory role, the RBI, as the central bank of the country, has been playing a developmental role
in this important area and has taken several initiatives for a secure and efficient payment system.
66
A clearing house is an association of banks that facilitates payments through cheques between
different bank branches within a city/ place. It acts as a central meeting place for bankers to
exchange the cheques drawn on one another and to claim funds for the same. Such operations
are called 'clearing operations'. Generally one bank is appointed as in-charge of the clearing
operations. In the four metros and a few other major cities, however, RBI is looking after the
operations of the clearing house.
The paper based clearing systems comprise: (a) MICR Clearing, (b) Non- MICR clearing and
(c) High Value clearing. MICR stands for Magnetic Ink Character Recognition (MICR). MICR is
a technology for processing cheques. This is done through information contained in the bottom
strip of the cheque where the cheque number, city code, bank code and branch code are given.
Generally, if a cheque is to be paid within the same city (local cheque), it takes 2-3 days for the
money to come to the beneficiary's account. In case of High Value Clearing, however, which is
available only in some large cities, cheque clearing cycle is completed on the same day and the
customer depositing the cheque is permitted to utilise the proceeds the next day morning.36
The introduction of 'Speed Clearing' in June 2008 for collection of outstation cheques has
significantly brought down the time taken for realisation of outstation cheques from 10-14
days; now the funds are available to customers on T+1 (transaction day + 1 day) or T+2
(transaction day + 2 days) basis.
Cheque Truncation
Cheque Truncation is a system of cheque clearing and settlement between banks based on
electronic data/ images or both without physical exchange of instrument. Cheque truncation
has several advantages. First, the bank customers would get their cheques realised faster, as
T+0 (local clearing) and T+1 (inter-city clearing) is possible in Cheque Truncation System
(CTS). Second, faster realisation is accompanied by a reduction in costs for the customers and
the banks. Third, it is also possible for banks to offer innovative products and services based
on CTS. Finally, the banks have the additional advantage of reduced reconciliation and clearing
frauds.
36
To encourage customers to move from paper-based systems to electronic systems, which are more secure, faster
and less costly, the banks were advised in April 2009 to increase the threshold amount of cheque eligible to be
presented in High Value Clearing from Rs.1 lakh to Rs.10 lakhs and gradually discontinue the scheme in an
undisruptive manner over a period of the next one year. However, the facility of MICR/ Non-MICR clearing will
continue to be available for paper-based instruments.
67
Electronic Payment Systems
Payments can be made between two or more parties by means of electronic instructions
without the use of cheques. Generally, the electronic payment systems are faster and safer
than paper based systems. Different forms of electronic payment systems are listed below.37
Real Time Gross Settlement (RTGS) system, introduced in India in March 2004, is a system
through which electronic instructions can be given by banks to transfer funds from their account
to the account of another bank. The RTGS system is maintained and operated by RBI and
provides a means of efficient and faster funds transfer among banks facilitating their financial
operations. As the name suggests, funds transfer between banks takes place on a 'real time'
basis. Therefore, money can reach the beneficiary instantaneously. The system which was
operationalised with respect to settlement of transactions relating to inter-bank payments was
extended to customer transactions later. Though the system is primarily designed for large
value payments, bank customers have the choice of availing of the RTGS facility for their time-
critical low value payments as well. More than 60,000 branches as at end-September 2009 are
participants in the RTGS.
Electronic Funds Transfer (EFT) is a system whereby anyone who wants to make payment to
another person/ company etc. can approach his bank and make cash payment or give
instructions/ authorisation to transfer funds directly from his own account to the bank account
of the receiver/ beneficiary. RBI is the service provider for EFT. The electronic payment systems
comprise large value payment systems as well as retail payment mechanisms.
In addition, there are some electronic payment systems which are exclusively for retail
payments. The retail payment system comprises Electronic Clearing Services (ECS), National
Electronic Funds Transfer (NEFT) and card based payment systems including ATM network.
Electronic Clearing Service (ECS) is a retail payment system that can be used to make bulk
payments/ receipts of a similar nature especially where each individual payment is of a repetitive
nature and of relatively smaller amount. This facility is meant for companies and government
departments to make/ receive large volumes of payments rather than for funds transfers by
individuals. The ECS facility is available at a large number of centres. The ECS is further
divided into two types - ECS (Credit) to make bulk payments to individuals/ vendors and ECS
(Debit) to receive bulk utility payments from individuals.
37
In addition to these systems, some banks in India have begun to offer certain banking services through the
Internet that facilitate transfer of funds electronically.
68
Under ECS (Credit), one entity/ company would make payments from its bank account to a
number of recipients by direct credit to their bank accounts. For instance, companies make
use of ECS (Credit) to make periodic dividend/ interest payments to their investors. Similarly,
employers like banks, government departments, etc make monthly salary payments to their
employees through ECS (Credit). Payments of repetitive nature to be made to vendors can
also be made through this mode.
The payments are affected through a sponsor bank of the Company making the payment and
such bank has to ensure that there are enough funds in its accounts on the settlement day to
offset the total amount for which the payment is being made for that particular settlement.
The sponsor bank is generally the bank with whom the company maintains its account.
ECS (Debit) is mostly used by utility companies like telephone companies, electricity companies
etc. to receive the bill payments directly from the bank accounts of their customers. Instead of
making electricity bill payment through cash or by means of cheque, a consumer (individuals
as well as companies) can opt to make bill payments directly into the account of the electricity
provider/ company/ board from his own bank account. For this purpose, the consumer has to
give an application to the utility company (provided the company has opted for the ECS (Debit)
scheme), providing details of the bank account from which the monthly/ bi-monthly bill amount
can be directly deducted. Thereafter, the utility company would advise the consumer's bank to
debit the bill amount to his account on the due date of the bill and transfer the amount to the
company's own account. This is done by crediting the account of the sponsor bank, which
again is generally the bank with whom the company receiving the payments, maintains the
account. The actual bill would be sent to the consumer as usual at his address.
The settlement cycle under the ECS has been reduced to T+1 day from earlier T+3 days in
which as many as 114 banks with 30,780 branches participated as at the end of September,
2009.
National Electronic Funds Transfer (NEFT) system, introduced in November 2005, is a nationwide
funds transfer system to facilitate transfer of funds from any bank branch to any other bank
branch. The beneficiary gets the credit on the same day or the next day depending on the time
of settlement. Ninety one banks with over 61,000 branches participated in NEFT as at end of
September 2009.
The banks generally charge some processing fees for electronic fund transfers, just as in the
case of other services such as demand drafts, pay orders etc. The actual charges depend upon
the amount and the banker-customer relationship. In a bid to encourage customers to move
69
from paper-based systems to electronic systems, RBI has rationalised and made transparent
the charges the banks could levy on customers for electronic transactions. RBI on its part has
extended the waiver of its processing charges for electronic modes of payment up to the end
of March 2011.
In order not to be involved with day-to-day operations of the retail payment system, RBI has
encouraged the setting up of National Payment Corporation of India (NPCI) to act as an umbrella
organisation for operating the various retail payment systems in India. NPCI has since become
functional and is in the process of setting up its roadmap. NPCI will be an authorised entity
under the Payment & Settlement Systems Act and would, therefore, be subjected to regulation
and supervision of RBI.
Credit/ Debit cards are widely used in the country as they provide a convenient form of making
payments for goods and services without the use of cheques or cash. Issuance of credit card
is a service where the customer is provided with credit facility for purchase of goods and
services in shops, restaurants, hotels, railway bookings, petrol pumps, utility bill payments,
etc. The merchant establishment who accepts credit card payments claims the amount
subsequently from the customer's bank through his own bank. The card user is required to
pay only on receipt of the bill and this payment can be either in full or partially in instalments.
Banks issuing credit cards earn revenue from their customers in a variety of ways such as
joining fee, additional card fee, annual fee, replacement card fee, cash advance fee, charge
slip/ statement retrieval fee, charges on over limit accounts and late payment fee, interest on
delayed payment, interest on revolving credit, etc. The fees may vary based on the type of
card and from bank to bank. Banks earn income not only as issuers of credit cards, but also as
acquirers where the transaction occurs on a point of sale terminal installed by the bank.38 As
the Indian economy develops, it is expected that the retail market will increasingly seek short-
term credit for personal uses, and to a large extent, this rising demand would be met by the
issuance of credit cards.
Debit Card is a direct account access card. Unlike a credit card, in the case of a debit card, the
entire amount transacted gets debited from the customer's account as soon as the debit card
is used for purchase of goods and services. The amount permitted to be transacted in debit
card is to the extent of the amount standing to the credit of the card user's account.
38
A Point of Sale (POS) terminal is the instrument in which the credit card is swiped. The bank that installs POS
terminal is called the acquirer bank. A more detailed discussion on the POS terminal is given in Section 8.1.3.
70
Automated Teller Machines (ATMs) are mainly used for cash withdrawals by customers. In
addition to cash withdrawal, ATMs can be used for payment of utility bills, funds transfer
between accounts, deposit of cheques and cash into accounts, balance enquiry and several
other banking transactions which the banks owning the ATMs might want to offer.
NRI, as an individual, can remit funds into India through normal banking channels using the
facilities provided by the overseas bank. Alternately, an NRI can also remit funds through
authorised Money Transfer Agents (MTA). Further, a number of banks have launched their
inward remittance products which facilitate funds transfer on the same day/ next day.
Many banks have branches rendering cash management services (CMS) to corporate clients,
for managing their receivables and payments across the country. Under cash management
services, banks offer their corporate clients custom-made collection, payment and remittance
services allowing them to reduce the time period between collections and remittances, thereby
streamlining their cash flows. Cash management products include physical cheque-based
clearing, electronic clearing services, central pooling of country-wide collections, dividend and
interest remittance services and Internet-based payment products. Such services provide
customers with enhanced liquidity and better cash management.
The Reserve Bank of India (RBI) has allowed the Scheduled Commercial Banks (SCBs) to
undertake certain financial services or para-banking activities and has issued guidelines to
SCBs for undertaking these businesses. The RBI has advised banks that they should adopt
adequate safeguards so that para-banking activities undertaken by them are run on sound
and prudent lines. Banks can undertake certain eligible financial services either departmentally
or by setting up subsidiaries, with prior approval of RBI.
Primary Dealers can be referred to as Merchant Bankers to Government of India. In 1995, the
Reserve Bank of India (RBI) introduced the system of Primary Dealers (PDs) in the Government
Securities Market, which comprised independent entities undertaking Primary Dealer activity.
In order to broad base the Primary Dealership system, banks were permitted to undertake
Primary Dealership business in 2006-07. To do primary dealership business, it is necessary to
have a license from the RBI.
71
The two primary objectives of the PD system are to:
improve secondary market trading system, which would (a) contribute to price discovery,
(b) enhance liquidity and turnover and (c) encourage voluntary holding of government
securities.
Net NPAs of less than 3 per cent and a profit making record for the last three years
Investment Banking is not one specific function or service but rather an umbrella term for a
range of activities. These activities include issuing securities (underwriting) for companies,
managing portfolios of financial assets, trading securities (stocks and bonds), helping investors
purchase securities and providing financial advice and support services. It can be seen that all
these services are capital market related services. These services are offered to governments,
companies, non-profit institutions and individuals.
As per the Securities and Exchange Board of India (SEBI) (Merchant Bankers) Rules, 1992 and
SEBI (Merchant Bankers) Regulations, 1992, merchant banking service is any service provided
in relation to issue management either by making arrangements regarding selling, buying or
subscribing securities as manager, consultant, advisor or rendering corporate advisory service
in relation to such issue management. This, inter alia, consists of preparation of prospectus
and other information relating to the issue, determining financial structure, tie up of financiers
and final allotment and refund of the subscription for debt/ equity issue management and
acting as advisor, consultant, co-manager, underwriter and portfolio manager. In addition,
merchant banking services also include advisory services on corporate restructuring, debt or
equity restructuring, loan restructuring, etc. Fees are charged by the merchant banker for
72
rendering these services. Banks and Financial Institutions including Non Banking Finance
Companies (NBFCs) providing merchant banking services are governed by the SEBI Rules and
Regulations stated above.
On the other hand, the term 'Investment Banking' has a much wider connotation and has
gradually come to refer to all types of capital market activity. Investment banking thus
encompasses not merely merchant banking but other related capital market activities such as
stock trading, market making, broking and asset management as well as a host of specialized
corporate advisory services in the areas of mergers and acquisitions, project advisory and
business and financial advisory.
Investment banking has a large number of players: Indian and foreign. The large foreign
investment banks such as Goldman Sachs and Merrill Lynch (which are standalone investment
banks) have entered India attracted by India's booming economy. However, the Indian
investment banking firms (including investment banking arms of Indian commercial banks)
have generally succeeded in holding their own as they are able to service both small and large
customers. However, one area foreign banks still dominate is global mergers and acquisitions.
A number of banks, both in the private and public sectors have sponsored asset management
companies to undertake mutual fund business. Banks have entered the mutual fund business,
sometimes on their own (by setting up a subsidiary) and sometimes in joint venture with
others. Other banks have entered into distribution agreements with mutual funds for the sale
of the latter's mutual fund products, for which they receive fees. The advantage that banks
enjoy in entering the mutual fund businesses is mainly on account of their wide distribution
network.
Indian banks which have sponsored mutual fund business so far include ICICI Bank, HDFC
Bank and Kotak Mahindra Bank in the private sector, and State Bank of India in the public
sector. As per AMFI (Association of Mutual Funds in India) data, total assets under management
of all mutual funds in India amounted to Rs. 417, 300 crores as on March 31, 2009, of which
bank sponsored mutual funds accounted for 15.5 percent.
Money Market Mutual Funds (MMMFs) come under the purview of SEBI regulations. Banks and
Financial Institutions desirous of setting up MMMFs would, however, have to seek necessary
clearance from RBI for undertaking this additional activity before approaching SEBI for
Consequent upon the issue of Government of India Notification dated May 24, 2007, banks
73
have been advised that they may now undertake Pension Funds Management (PFM) through
their subsidiaries set up for the purpose. This would be subject to their satisfying the eligibility
criteria prescribed by Pensions Fund Regulatory and Development Authority (PFRDA) for Pension
Fund Managers. Banks intending to undertake PFM should obtain prior approval of RBI before
engaging in such business.
The RBI has issued guidelines for banks acting as Pension Fund Managers. According to the
guidelines, banks will be allowed to undertake PFM through their subsidiaries only, and not
departmentally. Banks may lend their names/ abbreviations to their subsidiaries formed for
PFM, for leveraging their brand names and associated benefits thereto, only subject to the
banks maintaining 'arm's length' relationship with the subsidiary. In order to provide adequate
safeguards against associated risks and ensure that only strong and credible banks enter into
the business of PFM, the banks complying with the following eligibility criteria (as also the
solvency margin prescribed by PFRDA) may approach the RBI for necessary permission:
Net worth of the bank should be not less than Rs.500 crores.
CRAR should be not less than 11% during the last three years.
Bank should have made net profit for the last three consecutive years.
Pension Fund Regulatory and Development Authority (PFRDA) had invited Expressions of
Interest from public sector entities for sponsoring Pension Funds for Government employees
on 11th May 2007. In response, Expressions of Interest were received from seven public
sector entities. A committee constituted by the PFRDA has selected State Bank of India
(SBI), UTI Asset Management Company (UTIAMC) and Life Insurance Corporation (LIC) to
be the first sponsors of pension funds in the country under the new pension system (NPS)
for government employees.
In the depository system, securities are held in depository accounts in dematerialized form.
Transfer of securities is done through simple account transfers. The method does away with
the risks and hassles normally associated with paperwork. The enactment of the Depositories
Act, in August 1996, paved the way for the establishment of National Securities Depository
Limited (NSDL) and later the Central Depository Services (India) Limited (CDSL). These two
institutions have set up a national infrastructure of international standards that handles most
of the securities held and settled in dematerialised form in the Indian capital markets.
74
As a depository participant of the National Securities Depository Limited (NSDL) or Central
Depository Services (India) Limited (CDSL), a bank may offer depository services to clients
and earn fees. Custodial depository services means safe keeping of securities of a client and
providing services incidental thereto, and includes:
collecting the benefit of rights accruing to the client in respect of the securities;
keeping the client informed of the action taken or to be taken by the issuer of securities,
having a bearing on the benefits or rights accruing to the client; and
A number of banks and financial institutions are seeking a share in the fast-growing wealth
management services market. Currently, a high net worth individual can choose from among
a number of private sector and public sector banks for wealth management services.40 In
addition to high net worth resident Indians, non-resident Indians (NRIs) form a major chunk
of the customer base for personal wealth management industry in India.
Banks that do portfolio management on behalf of their clients are subject to several regulations.
No bank should introduce any new portfolio management scheme (PMS) without obtaining
specific prior approval of RBI. They are also to comply with the guidelines contained in the
SEBI (Portfolio Managers) Rules and Regulations, 1993 and those issued from time to time.
The following conditions are to be strictly observed by the banks operating PMS or similar
scheme:
PMS should be entirely at the customer's risk, without guaranteeing, either directly or
indirectly, a pre-determined return.
Funds should not be accepted for portfolio management for a period less than one
year.
39
Portfolio management deals with only financial assets whereas wealth management covers both financial assets
and non financial assets such as real estates.
40
HSBC Private Bank in Asia, for example, provides the full spectrum of private banking solutions for affluent
individuals and their families. Their services include investment services, family wealth advisory services, private
wealth solutions such as wealth planning and protection, and traditional banking services. (Source: HSBC website)
75
Portfolio funds should not be deployed for lending in call/ notice money; inter-bank
term deposits and bills rediscounting markets and lending to/ placement with corporate
bodies.
Banks should maintain clientwise account/ record of funds accepted for management
and investments made and the portfolio clients should be entitled to get a statement
of account.
Banks' own investments and investments belonging to PMS clients should be kept
distinct from each other, and any transactions between the bank's investment account
and client's portfolio account should be strictly at market rates.
6.2.7 Bancassurance
With the issuance of Government of India Notification dated August 3, 2000, specifying
'Insurance' as a permissible form of business that could be undertaken by banks under Section
6(1) (o) of the BR Act, banks were advised to undertake insurance business with a prior
approval of the RBI. However, insurance business will not be permitted to be undertaken
departmentally by the banks.
A number of banks (both in public and private sectors) have entered into joint venture
partnerships with foreign insurance companies for both life and non-life insurance business. At
present, Indian partners (either alone or jointly) hold at least 74% of Indian insurance joint
ventures. This is because the maximum holding by foreign companies put together cannot
exceed 26% of the equity of Indian insurance ventures. Laws and regulations governing
insurance companies currently provide that each promoter should eventually reduce its stake
to 26% following the completion of 10 years from the commencement of business by the
concerned insurance company.
The advantage that banks have in entering the insurance business is mainly on account of
their wide distribution network. Banks are able to leverage their corporate and retail customer
base for cross selling insurance products. Banks collect fees from these subsidiaries for
generating leads and providing referrals that are converted into policies.
Some of the insurance joint ventures promoted by banks have become leaders in the insurance
business. ICICI Bank and HDFC Bank have promoted joint ventures in both life and non-life
business, while State Bank of India (SBI) has promoted a successful joint venture in life
business.
76
In addition, some banks distribute Third Party Insurance Products. With a view to provide
"one stop banking" to their customers, banks distribute life insurance products and general
insurance products through their branches. Banks have entered into agency agreements with
life and non-life companies to distribute their various insurance products, for which they are
paid a commission. The personnel involved in selling these insurance products have to be
authorised by the IRDA regulations to act as specified persons for selling insurance products.
According to IRDA Annual Report, 2007-08, out of the total new life insurance business
premium acquired by all the life insurance companies during 2007-08, new business sold
through banks accounted for 7.28% of total new life insurance premium, with Life Insurance
Corporation of India being able to distribute only 1.15% of its total new business premium
through banks, while the other new life insurance ventures were able to sell 18.20% of their
new business premium through banks.
77
CHAPTER 7: Relationship between Bank and
Customer
In India, banks face a challenge of providing services to a broad range of customers, varying
from highly rated corporates and high net worth individuals to low-end depositors and borrowers.
Banks usually place their customers into certain categories so that they are able to (a) develop
suitable products according to customer requirements and (b) service customers efficiently.
The bank-customer relationship is influenced by several dimensions, notably:
While banks are competing with each other to attract the most profitable businesses,
financial inclusion is increasingly becoming part of their agenda. An important issue in
India is that a large number of people, nearly half of the adult population, still do not
have bank accounts. 'Financial Inclusion' would imply bringing this large segment of
the population into the banking fold.
Second, banks have started using innovative methods in approaching the customers;
technology is an important component of such efforts.
Finally, on account of security threats as well as black money circulating in the system,
care has to be taken to identify the customers properly, know the sources of their
funds and prevent money laundering.
Appropriate targeting
Product life cycle strategy means addressing the concerns of the customers throughout the life
cycle of the banking products. The product life-cycle strategy of a banking product (such as
bank loans) comprises:
Complaint handling.
78
7.1.2 Appropriate targeting41
Banks focus on customer profiles and offer differentiated deposit and credit products to various
categories of customers depending on their income category, age group and background. For
example, banks may segment various categories of customers to offer targeted products,
such as Private Banking for high net worth individuals, Defence Banking Services for defence
personnel, and Special Savings Accounts for trusts.
Banks use multiple channels to target specific segments of population. Banks deliver their
products and services through a variety of channels, ranging from traditional bank branches,
extension counters and satellite offices, to ATMs, call centres and the Internet. Some private
banks also appoint direct marketing agents or associates, who deliver retail credit products.
These agents help banks achieve deeper penetration by offering doorstep service to the customer.
To ensure that their customers get 'one-stop solution', a key component of banks' customer
strategy is to offer an expanded set of products and services. Through their distribution network,
banks may offer Government of India savings bonds, insurance policies, a variety of mutual
fund products, and distribute public offerings of equity shares by Indian companies. As a
depository participant of the National Securities Depository Limited (NSDL) and Central
Depository Services (India) Limited (CDSL), a bank may offer depository share accounts to
settle securities transactions in a dematerialized mode and so on.
retail customers;
corporate customers;
international customers;
rural customers.
A bank formulates its overall customer strategy to increase profitable business keeping in
mind its strengths and weaknesses. The key strategy components and trends in each of these
customer groups are briefly discussed below.
41
This aspect is described in detail under 7.2
79
7.2.1 Retail Customers
With growing household incomes, the Indian retail financial services market has high growth
potential. The key dimensions of the retail strategy of a bank include customer focus, a wide
range of products, customer convenience, widespread distribution, strong processes and prudent
risk management.
The fee income that banks earn while extending commercial banking services to retail customers
includes retail loan processing fees, credit card and debit card fees, transaction banking fees
and fees from distribution of third party products. Cross selling of the entire range of credit
and investment products and banking services to customers is often a key aspect of the retail
strategy.
There is widespread acceptance by the average consumer of using credit to finance purchases.
Given this background, retail credit has emerged as a rapidly growing opportunity for banks.
Banks also focus on growth in retail deposit base which would include low cost current account
and savings bank deposits. Retail deposits are usually more stable than corporate bulk deposits
or wholesale deposits.
Banks offer a range of retail products, including home loans, automobile loans, commercial
vehicle loans, two wheeler loans, personal loans, credit cards, loans against time deposits and
loans against shares. Banks also fund dealers who sell automobiles, two wheelers, consumer
durables and commercial vehicles. A few banks have set up home finance subsidiaries in order
to concentrate on this business in a more focused manner.
Personal loans are unsecured loans provided to customers who use these funds for various
purposes such as higher education, medical expenses, social events and holidays. Personal
loans include micro-banking loans, which are relatively small value loans to lower income
customers in urban and rural areas.
Credit cards have become an important component of lending to the retail segment in the case
of a number of banks. As the Indian economy develops, it is expected that the retail market
will seek short-term credit for personal uses, and the use of credit cards will facilitate further
80
Box 7.1: Share of retail loans in total loans
The share of retail loans in total loans and advances of Scheduled Commercial Banks (SCBs)
was 21.3% at end-March 2009. The maximum share was accounted for by housing loans
followed by 'other personal loans', auto loans, credit card receivables, loans for commercial
durables, in that order.
Most of the private and foreign banks have integrated the strategy with regard to small and
medium enterprises with their strategy for retail products and services. Hence, the retail
focus includes meeting the working capital requirements, servicing deposit accounts and
providing other banking products and services required by small and medium enterprises. Of
late, public sector banks are also very active in lending to this business segment. Banks often
adopt a cluster or community based approach to financing of small enterprises, that is, identifying
small enterprises that have a homogeneous profile such as apparel manufacturers or jewellery
exporters.
Corporate business covers project finance including infrastructure finance, cross border finance,
working capital loans, non-fund based working capital products and other fee-based services.
Banks often have to make special efforts to get the business of highly rated corporations.
The recent emphasis on infrastructure in India, including projects being built on private-public
partnership basis, is leading to profitable business opportunities in this area. Further, Indian
companies are also going global, and making large acquisitions abroad. This trend is likely to
pick up momentum in future and banks which gear themselves up to meet such requirements
from their customers will gain.
There is also a growing demand for foreign exchange services from the corporate customers.
Banks offer fee-based products and services including foreign exchange products, documentary
credits (such as letter of credit or LC) and guarantees to business enterprises. Corporate
customers are also increasingly demanding products and services such as forward contracts
and interest rate and currency swaps.
Indian banks while expanding business abroad have usually been leveraging home country
links. The emphasis has been on supporting Indian companies in raising corporate and project
finance overseas for their investments in India and abroad (including financing of overseas
acquisitions by Indian companies), and extending trade finance and personal financial services
(including remittance and deposit products) for non-resident Indians.
81
7.2.4 Rural Banking Customers
Over 70% of India's citizens live in rural areas. Hence, there is a need for banks to formulate
strategies for rural banking, which have to include products targeted at various customer
segments operating in rural areas. These customer segments include corporates, small and
medium enterprises and finally the individual farmers and traders. Primary credit products for
the rural retail segment include farmer financing, micro-finance loans, working capital financing
for agro-enterprises, farm equipment financing, and commodity based financing. Other financial
services such as savings, investment and insurance products customised for the rural segment
are also offered by banks.
In the retail markets, competition is intense among all categories of banks. However, even
though foreign banks have product and delivery capabilities, they are constrained by limited
branch network. Over the last decade, because of their stronger technology and marketing
capabilities, the new private sector banks have been gaining market share at the expense of
public sector banks. They are slowly moving towards the Tier II cities to tap potential business.
In the case of corporate business, public sector banks and the new private sector banks have
developed strong capabilities in delivering working capital products and services. Public sector
banks have built extensive branch networks that have enabled them to raise low cost deposits
and, as a result, price their loans and fee-based services very competitively. Their wide
geographical reach facilitates the delivery of banking products to their corporate customers
located all over the country.
Traditionally, foreign banks in India have been active in providing trade finance, fee-based
services and other short-term financing products to highly rated Indian corporations. A few
Indian public sector and private sector banks effectively compete with foreign banks in these
areas. The larger Indian commercial banks also compete with foreign banks in foreign currency
lending and syndication business. However, foreign banks are at an advantage due to their
larger balance sheets and global presence.
82
In project finance, a few large Indian commercial banks have entered the infrastructure finance
space. ICICI Bank and IDBI Bank have an advantage in this area as they were already in this
business in their previous avatar. However, given their strong deposit base, the larger commercial
banks have decided to expand their presence in this market. In project finance, foreign banks
have an advantage where foreign currency loans are involved.
Indian commercial banks have limited access to foreign currency resources, and hence, face
difficulty in participating in global takeovers and acquisitions being undertaken by Indian
companies, although some leading Indian banks are participating in financing such cases in a
limited manner.
In delivering sophisticated foreign exchange products like derivatives, a few Indian banks in
the private sector are competing with their foreign counterparts.
For products and services targeted at non-resident Indians and Indian businesses, there is
intense competition among a large number of banks, both Indian and foreign.
In the agriculture and priority segments, the principal competitors are the large public sector
banks, regional rural banks (RRBs) and cooperative banks. This is due to the extensive physical
presence of public sector banks and RRBs throughout India via their large branch networks
and their focus on agriculture and priority sectors.
RBI has set up a full-fledged Customer Service Department with a view to making banks more
customer-friendly and has taken a number of steps to disseminate instructions/ guidelines
relating to customer service and grievance redressal by banks by placing all customer related
notifications and press releases on its multi-lingual Internet site. Customers of commercial
banks can also approach the RBI with their grievances.
In February 2006, RBI set up the Banking Codes and Standards Board of India (BCSBI) as an
independent autonomous watchdog to ensure that customers get fair treatment in their dealings
42
Discussed under section 7.5.
83
with banks. The BCSBI has published the "Code of Banks' Commitments to Customers "(the
Code)" which sets minimum standards of banking practice and benchmarks in customer service
for banks to follow. Commercial banks have become members of the BCSBI and have adopted
the Code as their Fair Practice Code in dealings with customers.
Banks are required to constitute a Customer Service Committee of their respective Boards and
include experts and representatives of customers as invitees to enable improvements in the
quality of customer service.
Further, banks have to establish Customer Service Committee at branch level. Each bank is
also expected to have a nodal department / official for customer service in the Head Office and
each controlling office whom customers with grievances can approach in the first instance and
with whom the Banking Ombudsman and RBI can liaise.
Customer service is projected as a priority objective of banks along with profit, growth and
fulfilment of social obligations. Banks need to have a board approved policy for the following:
Banks should provide a complaints/ suggestions box at each of their offices. Further, at every
office of the bank, a notice requesting the customers to meet the branch manager may be
displayed regarding grievances, if the grievances remain unattended. A complaint book with
perforated copies in each set may be introduced, so designed as to instantly provide an
acknowledgement to the customers and intimation to the controlling office.
43
However, the banking system cannot depend only on regulatory steps to improve customer service. Market based
solutions are also necessary.
44
According to RBI guidelines, in the case of delays in collection of bills, the concerned bank should pay interest to
the aggrieved party for the delayed period in respect of collection of bills at the rate of 2% per annum above the
rate of interest payable on balances of Savings Bank accounts.
84
7.4.3 Giving Publicity to the Policies
Banks should ensure that wide publicity is given to the customer policies formulated by them
by placing them prominently on their web-site and also disseminating the policies through
notice board in their branches, issuing booklets/ brochures, etc.
Banks should develop policies for special categories of customers including sick/ old/
incapacitated persons, persons with disabilities, visually impaired persons, etc. Such a customer
may require identification through the customer's thumb or toe impression by two independent
witnesses known to the bank, one of whom should be a responsible bank official.
The scope of the secrecy law in India has generally followed the common law principles based
on implied contract. Bankers' obligation to maintain secrecy about customers' accounts arises
out of the contractual relationship between the banker and customer, and as such no information
should be divulged to third parties except under circumstances which are well defined. The
following exceptions are normally accepted, where:
At the time of opening of accounts of the customers, banks are required to collect certain
information; but they also collect a lot of additional personal information, which can be
potentially used for cross selling various financial services by the banks, their subsidiaries
and affiliates. Sometimes, such information is also provided to other agencies. RBI has
advised banks that the information provided by the customer for Know Your Customer
(KYC) compliance (see Section 7.5) while opening an account is confidential and divulging
any details thereof for cross selling or any other purpose would be in breach of customer
confidentiality obligations..
85
7.4.6 National Do Not Call Registry
With a view to reducing the number of unsolicited marketing calls received by customers, the
RBI has advised banks that all telemarketers, viz., direct selling agents/ direct marketing
agents engaged by them should be registered with the Department of Telecommunications
(DoT).
The Banking Ombudsman Scheme makes available an expeditious and inexpensive forum to
bank customers for resolution of complaints relating to certain services rendered by banks.
The Banking Ombudsman Scheme was introduced under Section 35 A of the Banking Regulation
Act (BR Act), 1949 with effect from 1995.
All Scheduled Commercial Banks, Regional Rural Banks and Scheduled Primary Co-operative
Banks are covered under the Scheme.
The Banking Ombudsman is a senior official appointed by the RBI to receive and redress
customer complaints against deficiency in certain banking services (including Internet banking
and loans and advances). At present, fifteen Banking Ombudsmen have been appointed, with
their offices located mostly in state capitals.
One can file a complaint before the Banking Ombudsman if (a) the reply to the representation
made by the customer to his bank is not received from the concerned bank within a period of
one month after the bank has received the representation, or (b) the bank rejects the complaint,
or (c) if the complainant is not satisfied with the reply given by the bank..
Further, and harassment and mental anguish suffered by
the complainant while passing such award.
86
7.5.4 Further recourse available
If a customer is not satisfied with the decision passed by the Banking Ombudsman, he can
approach the Appellate Authority against the Banking Ombudsman's decision. Appellate
Authority is vested with a Deputy Governor of RBI. He can also explore any other recourse
available to him as per the law. The bank also has the option to file an appeal before the
appellate authority under the scheme.
Banks are required to follow Know Your Customer (KYC) guidelines. These guidelines are
meant to weed out and to protect the good ones and the banks. With the growth in organized
crime, KYC has assumed great significance for banks. The RBI guidelines on KYC aim at
preventing banks from being used, intentionally or unintentionally, by criminal elements for
money laundering or terrorist financing activities. They also enable banks to have better
knowledge and understanding of their customers and their financial dealings. This in turn
helps banks to manage their risks better. The RBI expects all banks to have comprehensive
KYC policies, which need to be approved by their respective boards.
Banks should frame their KYC policies incorporating the following four key elements:
d) Risk Management.
Every bank should develop a clear Customer Acceptance Policy laying down explicit criteria for
acceptance of customers. The usual elements of this policy should include the following. Banks,
for example, should not open an account in anonymous or fictitious/ benami name(s). Nor
should any account be opened where the bank's due diligence exercises relating to identity
has not been carried out. Banks have to ensure that the identity of the new or existing
customers does not match with any person with known criminal background. If a customer
wants to act on behalf of another, the reasons for the same must be looked into.
However, the adoption of customer acceptance policy and its implementation should not become
too restrictive and should not result in denial of banking services to general public, especially
to those who are financially or socially disadvantaged.
87
7.6.2 Customer Identification Procedures
Customer identification means identifying the customer and verifying his/her identity by using
reliable, independent source documents, data or information. For individual customers, banks
should obtain sufficient identification data to verify the identity of the customer, his address
and a recent photograph. The usual documents required for opening deposit accounts are
given in Box 7.3. For customers who are legal persons, banks should scrutinize their legal
status through relevant documents, examine the ownership structures and determine the
natural persons who control the entity.
Box 7.3: Documents for opening deposit accounts under KYC guidelines
The Customer identification will be done on the basis of documents provided by the prospective
customer as under:
a) Passport or Voter ID card or Pension Payment Orders (Govt./PSUs) alone, whereon the
address is the same as mentioned in account opening form.
b) Any one
vii) Pension Payment Orders (Govt./PSUs), if the address differs from the one mentioned
in the account opening form
viii) Photo ID card issued to bonafide students of Universities/ Institutes approved by UGC/
AICTE
Proof of address
88
iv) Electricity bill
v) Telephone bill
x) Copies of registered leave & license agreement/ Sale Deed/ Lease Agreement may be
accepted as proof of address
xi) Certificate issued by hostel and also, proof of residence incorporating local address, as
well as permanent.
Ongoing monitoring is an essential element of effective KYC procedures. Banks can effectively
control and reduce their risk only if they have an understanding of the normal and reasonable
activity of the customer so that they have the means of identifying the transactions that fall
outside the regular pattern of activity. Banks should pay special attention to all complex,
unusually large transactions and all unusual patterns which have no apparent economic or
visible lawful purpose. Banks may prescribe threshold limits for a particular category of accounts
and pay particular attention to the transactions which exceed these limits..
Banks should further ensure that the provisions of Foreign Contribution (Regulation) Act, 1976
as amended from time to time, wherever applicable, are strictly adhered to.
Banks should, in consultation with their boards, devise procedures for creating risk profiles of
their existing and new customers and apply various anti-money laundering measures keeping
in view the risks involved in a transaction, account or banking/ business relationship.
89
Banks should prepare a profile for each new customer based on risk categorisation. The customer
profile may contain information relating to customer's identity, social/ financial status, nature
of business activity, information about his clients' business and their location etc. Customers
may be categorised into low, medium and high risk. For example, individuals (other than high
net worth individuals) and entities whose identities and sources of wealth can be easily identified
and transactions in whose accounts by and large conform to the known transaction profile of
that kind of customers may be categorised as low risk. Salaried employees, government owned
companies, regulators etc fall in this category. For this category of customers, it is sufficient to
meet just the basic requirements of verifying identity.
There are other customers who belong to medium to high risk category. Banks need to apply
intensive due diligence for higher risk customers, especially those for whom the sources of
funds are not clear. Examples of customers requiring higher due diligence.
Banks' internal audit and compliance functions have an important role in evaluating and ensuring
adherence to the KYC policies and procedures. Concurrent/ Internal Auditors should specifically
check and verify the application of KYC procedures at the branches and comment on the
lapses observed in this regard. have taken place within a month and the aggregate value of such
transactions exceeds Rs 10 Lakh;
c) all cash transactions where forged or counterfeit currency notes or bank notes have
been used as genuine and where any forgery of a valuable security or a document has
taken place facilitating the transaction; and
90
CHAPTER 8: Evolving Trends in Modern Banking
There are a number of trends evolving in modern banking, the most important of which relate
to (a) technology, (b) outsourcing of services and (c) financial inclusion
8.1 Technology
Banks in India have started using technology in a proactive manner. The huge number of bank
customers and their myriad needs are being met in increasingly sophisticated ways. In a
number of areas, the foreign banks and the new private sector banks have been the first
movers in the application of technology, but public sector banks are also catching up. One
major advantage that Indian banks have is the availability of major IT companies in India who
are the world leaders in IT applications.
Through its website, a bank may offer its customers online access to account information and
payment and fund transfer facilities. The range of services offered differs from bank to bank
depending mainly on the type and size of the bank. Internet banking is changing the banking
industry and affecting banking relationships in a major way (see box 8.1).
Shopping Online: One can shop securely online with the existing debit/credit card. This can
also be done without revealing the customer's card number.
Prepaid Mobile Refill: A bank's account holder can recharge his prepaid mobile phone with
this service.
Bill Pay: A customer can pay his telephone, electricity and mobile phone bills through the
Internet, ATMs, mobile phone and telephone.
Register & Pay: One can view and pay various mobile, telephone, electricity bills and insurance
premiums on-line. After registering, customers can get sms and e-mail alerts every time a
bill is received.
RTGS Fund Transfer: RTGS is an inter-bank funds transfer system, where funds are transferred
as and when the transactions are triggered (i.e. real time).
Online Payment of Taxes: A customer can pay various taxes online including Excise and
Service Tax, Direct Tax etc.
91
8.1.2 Mobile Banking Transactions
Some banks have started offering mobile banking and telebanking to customers. The expansion
in the use and geographical reach of mobile phones has created new opportunities for banks to
use this mode for banking transactions and also provide an opportunity to extend banking
facilities to the hitherto excluded sections of the society.
The RBI has adopted Bank Led Model in which mobile phone banking is promoted through
business correspondents of banks.45 The operative guidelines for banks on Mobile Banking
Transactions in India were issued on October 8, 2008. Only banks who have received one-time
approval from the RBI are permitted to provide this facility to customers.
Till June 30, 2009, 32 banks had been granted permission to operate Mobile Banking in
India, of which 7 belonged to the State Bank Group, 12 to nationalised banks and 13 to
private/ foreign banks.
To use smart cards/debit cards/credit cards for the purchase of an item or for payment of a
service at a merchant's store, the card has to be swiped in a terminal (known as Point of Sale
or POS terminal) kept at the merchant's store. As soon as the card is put on the terminal, the
details of the card are transmitted through dial-up or leased lines to a host computer. On
verification of the genuineness of the card, the transaction is authorised and concluded. It is
thus a means to 'check out' whether the cardholder is authorized to make a transaction using
the card. POS terminal is a relatively new concept.
A Point of Sale (PoS) terminal is an integrated PC-based device, with a monitor (CRT), PoS
keyboard, PoS printer, Customer Display, Magnetic Swipe Reader and an electronic cash drawer
all rolled into one. More generally, the POS terminal refers to the hardware and software used
for checkouts.
In recent years, banks are making efforts to acquire Point of Sale (PoS) terminals at the
premises of merchants across the country as a relatively new source of income. 'Acquiring' a
POS terminal means installing a POS terminal at the merchant premises. The installer of the
45
For more details on Business Correspondents of banks, please see 8.3.1
92
PoS terminals is the acquirer of the terminal and the merchants are required to hold an account
(merchant account) with the acquirer bank. The acquirer bank levies each transaction with a
charge, say 1% of the transaction value. This amount is payable by the merchant. Most
merchants do not mind absorbing this cost, because such facilities expand their sales. Some
merchants, however, pass on the cost to the customer. This business is known as merchant
acquisition business.
Banks are vying with one another for PoS machine acquisition, since it offers a huge opportunity
to generate additional income by increasing the card base and encouraging card holders to use
them for their merchant transactions. Leading banks--both in the public and private sectors--
are planning to install hundreds of thousands of these terminals across the country. Some
banks are planning joint ventures with global companies who have experience and expertise in
this area.
PoS terminals are predominantly used for sale and purchase transactions. The PoS terminals
have proved to be very effective in combating fraudulent transaction by on-line verification of
cards. Also, the RBI is expected to permit cash withdrawal transactions to cardholders from
PoS terminals installed with shopkeepers, mall stores, etc.
PoS terminals, having gained significant acceptance in metros, need to become more popular
in tier-2 and tier-3 cities. Public sector banks appear to be more interested in targeting the
smaller towns and cities where they have strong branch presence. The challenges of setting up
a widespread PoS network will be primarily (a) operational costs and (b) viability in smaller
towns and cities. Experts feel that once the technology stabilises and costs per unit comes
down, PoS terminals will become popular all over India.
However, there are risks involved in the process of outsourcing to a third party. These risks
include non-compliance with regulations, loss of control over business, leakage of customer
data, lack of expertise of the third party, poor service from third party, etc.
The key driving force behind outsourcing activities by any firm, irrespective of the nature of its
business, is cost saving. Initially, foreign banks were involved in outsourcing their activities in
order to leverage India's significant cost advantages. Organisations such as American Express,
Citibank, Standard Chartered Bank, ANZ Grindlays, HSBC, and ABN Amro have been outsourcing
93
their Information Technology Outsourcing (ITO)/Business Process Outsourcing (BPO)
requirements to leading Indian IT companies.
During the recent years, Indian banks also have started outsourcing their non-core activities.
The outsourced services may include software application support, maintenance of hardware
and software, hosting services, managing data centres, managing ATM networks across the
country, and disaster management. Further, banks are also giving contracts to third parties in
order to manage other support services such as call-support services, help-desk support,
credit card processing, cheque processing and clearing, ATM cash replenishment, cheque clearing
and collection, loan servicing, data processing, etc. The two main reasons for Indian banks
outsourcing non-core activities are similar to the overseas banks, i.e. cost consideration and
lack of expertise in certain areas. Through outsourcing, banks can also benefit from the domain
expertise of the service providers.
Outsourcing helps banks not only to focus on their core activities, but also in certain cases to
reduce the capital investment in developing the required infrastructure. Further, in-house
provision of services may be more expensive because they do not enjoy any economy of scale.
Service providers on the other hand may enjoy economies of scale because they cater to the
outsourcing requirements of a number of banks and companies and pass on some of the
benefits of scale to the outsourcing banks. It is not only the small banks who have started
outsourcing non-core activities; large public sector banks are also outsourcing their non-core
services.
Certain precautions need to be taken while outsourcing non-core functions. The legal contract
entered into with the vendors should be approved only after the quantification of benefits
through a thorough analysis. The vendor's domain knowledge is important for delivering the
services as per contract; for example customizing the bank's IT requirements. It is therefore
important for banks to verify whether the vendors have the required domain knowledge.
While outsourcing, the major concern for banks relates to security. There have been case
instances where the employees of vendors have leaked confidential information of clients. For
banks, such leakage of customer's account details can be disastrous, with huge fallout in
terms of monetary losses as well as severe damage to the bank's reputation.
To cope with the challenges, the RBI has proposed that the board of directors of a bank should
be responsible for the outsourcing policy as well as approving such activities undertaken by a
bank. In addition, the Information Technology Act, 2000 aims at tackling some aspects of
cyber crimes such as leakage of confidential information by vendors. As part of internal
inspection/audit, internal inspectors/ auditors in banks look into security related aspects of
outsourcing.
94
Going forward, it is expected that outsourcing in the banking sector in India will increase as
competition in the industry grows and support services increasingly becomes more sophisticated
and expensive.
Despite the expansion of the banking network in India since independence, a sizeable proportion
of the households, especially in rural areas, still do not have a bank account. Considerable
efforts have to be made to reach these unbanked regions and population. Financial Inclusion
implies providing financial services viz., access to payments and remittance facilities, savings,
loans and insurance services at affordable cost to those who are excluded from the formal
financial system. Box 8.3 gives indications of the low access to banking services in India.
National Sample Survey Organisation (NSSO) data reveal that 45.9 million farmer households
in the country (or 51.4% of the total) do not access credit, either from institutional or non-
institutional sources. Further, despite the vast network of bank branches, only 27% of total
farm households are indebted to formal sources; of which one-third also borrow from informal
sources. Farm households not accessing credit from formal sources as a proportion to total
farm households is especially high at 95.91%, 81.26% and 77.59% in the North Eastern,
Eastern and Central Regions respectively. Thus, apart from the fact that exclusion in general
is large, it also varies widely across regions, social groups and asset holdings. The poorer
the group, the greater is the exclusion.
The Lead Bank Scheme introduced by the RBI in 1969 is the earliest attempt by the RBI to
foster financial inclusion. Under the scheme, designated banks are made key instruments for
local development and entrusted with the responsibility of identifying growth centres, assessing
deposit potential and credit gaps and evolving a coordinated approach for credit deployment
in each district, in concert with other banks and other agencies. As at March 2009, there were
26 banks, mostly in the public sector, which have been assigned lead responsibility in 622
districts of the country.
The RBI's recent measures to promote financial inclusion includes: advising banks to open 'no
frills' accounts, introduction of Business Correspondent (BC)/ Business Facilitator (BF) model
and adoption of Information and Communication Technology (ICT) solutions for achieving
greater outreach.
95
Basic banking 'no-frills' account
To achieve the objective of greater financial inclusion, all banks have been advised by the RBI
to make available a basic banking 'no-frills' account either with 'nil' or very low minimum
balances. They have also been advised to keep the transaction charges low, which would
make such accounts accessible to vast sections of population. The nature and number of
transactions in such accounts could be restricted by the banks, but such restrictions must be
made known to the customer in advance in a transparent manner. The growth of such deposits
should be encouraged with affordable infrastructure and low operational costs through the use
of appropriate technology.
Scheduled Commercial Banks (SCBs) are making considerable efforts towards opening no-
frills accounts. The number of no-frills accounts opened by SCBs during 2006-07, 2007-08,
and 2008-09 were 6.73 million, 15.79 million and 33.02 million respectively.
With the objective of ensuring a greater financial inclusion and increasing the outreach of the
banking sector, the RBI has introduced business facilitators and business correspondent models
to enable banks to use the services of NGOs, Self Help Groups (SHGs) and micro finance
institutions as intermediaries in providing financial and banking services. These intermediaries
serve as the facilitators /correspondents of the banks.
In the business facilitator model, these intermediaries help the banks facilitate services such
as identification of borrowers, collection and preliminary processing of loan applications, creating
awareness about savings and other products, processing and submission of applications to
banks and post-sanction monitoring.
In addition to activities which the intermediaries can engage in the business facilitator model,
the scope of activities under the business correspondent's models include disbursal of small
value credit, recovery of principal/collection of interest, collection of small value deposits,
receipt and delivery of small value remittances etc.
Specify the suitable limits on cash holding by intermediaries, as also limits on individual
customer payments and receipts.
96
Require that the transactions are accounted for and reflected in the bank's books by
the end of the day or next working day.
Require all agreements/contracts with the customer to clearly specify that the bank is
responsible to the customer for acts of omission and commission of the business
facilitator / correspondent.
Banks pay reasonable commission/ fees to the Business Facilitators/ Correspondents. The
banks' agreement with them however should specifically prohibit them from charging any fees
to the customers for the services rendered by them on behalf of the banks.
Adoption of technology
To give an impetus to financial inclusion, the RBI has formulated a scheme to accelerate the
pace of adoption of the biometric access/ smart card based Electronic Benefit Transfer (EBT)
mechanism by the banks and roll out the EBT system in the States that are ready to adopt the
scheme. As per the scheme, RBI would partially reimburse the banks, for a limited period, the
cost of opening accounts with biometric access/ smart cards. Through these accounts, payment
of social security benefits, National Rural Employment Guarantee Act (NREGA) payments and
payments under other government benefit programmes would be routed.
India is experiencing an explosion in the use of mobile communication technology, and this
could be exploited by the financial sector for spreading the banking habit. Mobile phone users
belong to all strata of society, spread across urban, semi-urban and rural areas. However,
while encouraging the spread of cost-effective banking through mobile communications, it has
to be ensured that essential security features are maintained.
Micro Credit is defined as provision of credit and other financial services and products of very
small amount to the poor in rural, semi-urban and urban areas for enabling them to raise their
income levels. Micro Credit Institutions (MCIs) are those which provide these facilities.
Banks are allowed to devise appropriate loan and savings products and the related terms and
conditions including size of the loan, unit cost, unit size, maturity period, grace period, margins,
etc. Such credit covers not only consumption and production loans for various farm and non-
farm activities of the poor but also includes their other credit needs such as housing and
shelter improvements.
97
Self-Help Groups (SHGs)
As stated earlier, despite the expansion of the banking sector, the rural poor--particularly the
marginal farmers and landless labourers--depend to a very large degree on the moneylenders
for credit. Several studies have shown that Self Help Savings and Credit Groups have the
potential to bring together the banks and the rural poor.
A Self-Help Group (SHG) is a registered or unregistered group of 15-20 people who voluntarily
join together to save small amounts regularly. These pooled savings are used to make interest
bearing loans to group members. In addition to inculcating the habit of thrift, SHG activity
develops among its members the capacity to handle resources. When the group matures and
stabilizes, it gets linked to the banks under a SHG-banks linkage program and banks start
providing credit to SHGs. Note that banks provides credit to SHGs and not to individuals
belonging to the SHG. It is the SHGs who pass on the loans to the individuals. Thus, the SHGs
become responsible for repayment to the banks.
The group members use collective wisdom and peer pressure to ensure proper end-use of
credit and timely repayment thereof. Peer pressure acts as an effective substitute for collaterals.
Box 8.6 gives an indication of the financial inclusion through the self-help groups.
Under the SHGs-Bank Linkage Programme (SBLP) Approach, as on March 31, 2009, 4.2
million SHGs had outstanding loans of Rs.22, 680 crores from commercial banks, regional
rural banks and co-operative banks together. The share of commercial banks in total
outstanding loans is 71 per cent. Further, as on March 31, 2009, the number of SHGs
maintaining savings bank accounts with the banking sector was 6.1 million with outstanding
savings of Rs. 5,546 crores.
What are the advantages of financing through a SHG cut down
expenses on travel for completing paper work and on the loss of workdays in availing loans.
Since 1991-92, the National Bank for Agriculture and Rural Development (NABARD) has been
encouraging banks to extend micro credit loans to SHGs. The scheme was then extended to
RRBs and co-operative banks. More than 90 per cent of the groups linked with banks are
exclusive women's groups.
98
While the SHG-bank linkage programme has emerged as the dominant micro finance
dispensation model in India, other models too have evolved. For example, micro finance delivery
through microfinance institutions (MFIs) has also emerged as an important delivery channel.
99
References
2. Management of Banking & Financial Services, by Justin Paul & Padmalatha Suresh,
(Pearson Education)
3. Commercial Banking, The Management of Risk, Benton E Gup & James W Kolari, (Wiley
Student Edition)
4. Modern Banking, Theory & Practice, D.Muraleedharan, PHI, (Eastern Economy Edition)
8. Annual Report of the Reserve Bank of India for the Year 2008-09
11. Report on Currency and Finance 2006-08, "The Banking Sector in India: Emerging
Issues and Challenges", RBI.
100
MODEL TEST PAPER
Q.2 The Lead Bank scheme was introduced by the RBI in: [2 Marks]
(a) 1969
(b) 1973
(c) 1971
(d) 1967
Q.3 Under the new system, the RBI acts as the banker to Central Government only, not
banker to State Governments. [2 Marks]
(a) FALSE
(b) TRUE
Q.4 In a 'no frills' savings bank account, banks usually waive minimum balances condition.
[2 Marks]
(a) FALSE
(b) TRUE
Q.5 Hindu Undivided Families (HUFs) are not allowed to open current accounts. [1 Mark]
(a) FALSE
(b) TRUE
Q.6 While private sector banks and foreign banks earn fees for undertaking government
related business, public sector banks have to offer these services for free to government
departments. [2 Marks]
(a) TRUE
(b) FALSE
Q.7 The three main types of deposit accounts collected by banks are savings bank,
certificates of deposit and term deposits. [1 Mark]
(a) TRUE
(b) FALSE
101
Q.8 _____ and ____ are two major companies in the mortgage business and provide stiff
competion to the commercial banks in the disbursal of housing loans. [2 Marks]
(a) NEDFI, HDFC
(b) NEDFI, HUDCO
(c) HDFC, HUDCO
(d) LIC, HUDCO
Q.9 In case a depositor wishes to withdraw his deposits prematurely, banks [2 Marks]
(a) do not allow the same till maturity of the deposits
(b) charge a penalty for the same
(c) do not charge any penalty and allow the same
(d) do not allow premature withdrawal
Q.11 For filing and resolving customer complaints, the Banking Ombudsman [1 Mark]
(a) charges a fee of Rs. 500/-
(b) does not charge any fee
(c) charges a fee of Rs. 1500/-
(d) charges a fee of Rs. 1000/-
Q.12 Term deposits are meant for individuals and small businesses, and not for large
companies. [1 Mark]
(a) TRUE
(b) FALSE
Q.13 Foreign banks can be involved in all segments of personal loans, except in household
consumer finance. [2 Marks]
(a) FALSE
(b) TRUE
102
Q.14 In case a depositor is a sole proprietor and holds deposits in the name of the proprietory
concern as well as in the individual capacity, the maximum insurance cover is available
upto. [1 Mark]
(a) Rs. 1,00,000/-
(b) Rs. 2,00,000/-
(c) Rs. 5,00,000/-
(d) None of the above
Q.15 Banks give contracts to third parties in order to manage support services like
[2 Marks]
(a) help desk support
(b) credit card processing
(c) call support service
(d) All of the above
Q.16 In case of FCNR(B) Scheme, the period for fixed deposits is [2 Marks]
(a) as applicable to resident accounts
(b) for terms not less than 1 year and not more than 5 years
(c) for terms not less than 2 years and not more than 6 years
(d) at the discretion of the Bank
Q.17 To create a strong and competitive banking system, reform measures were initiated
in early 1990s. The thrust of these reforms was on [2 Marks]
(a) increasing operation efficiency
(b) strengthening supervision over banks
(c) developing technological and institutional infrastructure
(d) All of the above
Q.18 The past due debt collection policy of banks generally emphasizes on _________ at
the time of recovery [2 Marks]
(a) respect to customers
(b) appropriate letter authorising agents to collect recovery
(c) due notice to customers
(d) All of the above
Q.19 Bank sponsored mutual funds dominate the mutual fund industry [2 Marks]
(a) TRUE
(b) FALSE
103
Q.20 According to the risk diversification principle of bank lending, diversification should be
in terms of [2 Marks]
(a) customer base
(b) geographic location
(c) nature of business
(d) All of the above
Q.21 A Bank's aggregate exposure to the capital market, including both fund based and
non-fund based exposure to capital market, in all forms should not exceed 50% of its
net worth as on March 31 of the previous year. [2 Marks]
(a) TRUE
(b) FALSE
Q.22 Which of the following aspects are outlined by the loan policy of a bank? [2 Marks]
(a) rating standards
(b) lending procedures
(c) financial covenants
(d) All of the above
Q.23 Public sector banks are not allowed to enter the mutual fund business. [2 Marks]
(a) TRUE
(b) FALSE
Q.24 The RBI has adopted _____ Model in which mobile banking and is promoted through
business correspondents of banks [1 Mark]
(a) Bank Led
(b) Band Mobile
(c) Mobile
(d) All of the above
Q.25 Services offered to government departments include all the above except: [1 Mark]
(a) payments of salaries and pensions
(b) distributing RBI bonds to government departments
(c) direct and indirect tax collections
(d) remittance facilities
Q.26 In case of FCNR(B) Scheme, the period for fixed deposits is [2 Marks]
(a) for terms not less than 1 year and not more than 5 years
(b) for terms not less than 1 year and not more than 6 years
(c) for terms not less than 2 years and not more than 6 years
(d) for terms not less than 2 years and not more than 5 years
104
Q.27 The main advantage that banks have in entering the insurance ventures is the strong
capital base of banks. [1 Mark]
(a) FALSE
(b) TRUE
Q.28 While spreading the message of promotion of financial inclusion banks can make use
of Business Correspondents to facilitate the opening of 'no frills' accounts. [2 Marks]
(a) FALSE
(b) TRUE
Q.30 At present both IDBI and IDBI Bank operate as separate companies in the fields of
term lending and commercial banking businesses respectively. [2 Marks]
(a) FALSE
(b) TRUE
Q.31 The concept of base rate to be introduced with effect from July 1, 2010 would include
[2 Marks]
(a) product-specific operating cost
(b) credit risk premium
(c) tenor premium
(d) All of the above
Q.32 Credit appraisal can be done on a simplified basis by banks while carrying out credit
appraisal of smaller units. [2 Marks]
(a) TRUE
(b) FALSE
105
Q.34 While bulk of the forex treasury operations is on behalf of the clients, the major
portion of domestic treasury operations consists of proprietary trading [2 Marks]
(a) FALSE
(b) TRUE
Q.35 Banks have to ensure that underwriting commitments taken up by them in respect of
primary issue of shares or convertible debentures or units of equity-oriented mutual
funds comply with the ceiling prescribed for the banks' exposure to the capital markets.
[1 Mark]
(a) FALSE
(b) TRUE
Q.36 The concept of limited liablity introduced in Indian banking resulted in establishment
of [2 Marks]
(a) joint stock banks
(b) urban banks
(c) rural banks
(d) none of the above
Q.37 Derivative products like swaps cover only foreign exchange risks and not interest rate
risks. [1 Mark]
(a) FALSE
(b) TRUE
Q.38 Which of the following types of account fall under the time deposit category?
(i) Current account
(ii) Term deposit account [1 Mark]
(a) only (ii)
(b) only (i)
(c) (i) and (ii)
(d) None of the above
Q.39 An efficient financial intermediation process has which of the following components:
(i) effective mobilisation of savings
(ii) their allocation to the most productive uses [1 Mark]
(a) only (i)
(b) only (ii)
(c) (i) and (ii)
(d) None of the above
106
Q.40 A Nostro account is an account which an exporter maintains with a bank abroad
[2 Marks]
(a) FALSE
(b) TRUE
Q.42 Self-Help Groups are set up to basically borrow from banks without making any savings
contribution. [2 Marks]
(a) FALSE
(b) TRUE
Q.44 As at end-June 2008, the number of bank branches in the country was: [2 Marks]
(a) Between 75,000 to 80,000
(b) Between 70,000 to 75,000
(c) Between 80,000 to 85,000
(d) Between 65,000 to 70,000
Q.45 Loss assets comprise assets where a loss has been identified by [2 Marks]
(a) RBI
(b) Bank
(c) a and b
(d) None of the above
Q.46 RBI acts as the issuer of currency in India, but only the Central Government has the
right to destroy currency notes. [2 Marks]
(a) FALSE
(b) TRUE
107
Q. 47 No bank may hold shares in any company other than a subsidiary, exceeding 20.0%
of the paid up share capital of that company. [2 Marks]
(a) TRUE
(b) FALSE
Q.48 The deposits of regional rural banks are not covered by the DICGC [1 Mark]
(a) FALSE
(b) TRUE
Q.49 Normally, the following types of customers require higher due diligence under KYC
norms, except: [1 Mark]
(a) politically exposed persons (PEPs) of foreign origin.
(b) non-resident customers;
(c) farmers with land holding over 10 acres;
(d) high net worth individuals;
Q.50 The Banking Ombudsman is a senior official appointed by the RBI. [1 Mark]
(a) TRUE
(b) FALSE
Q.51 The RBI has prescribed that all SCBs should maintain their SLRs in [2 Marks]
(a) dated securities notified by RBI
(b) T-Bills of Government of India
(c) State Development Loans
(d) All the above
Q.53 With growing savings among households in India, the need for retail credit is declining.
[1 Mark]
(a) TRUE
(b) FALSE
108
Q.54 In India, the RBI prescribes the minimum SLR level for Scheduled Commercial Banks
in India in specified assets as a percentage of Bank's ______ [3 Marks]
(a) Net Demand and Time Liabilities
(b) Demand Liabilities
(c) Time Liability
(d) None of the above
Q.55 If the beneficiary of a cheque has lost the cheque, he can instruct the paying bank to
stop payment of the cheque without waiting for the account holder's instructions.
[2 Marks]
(a) TRUE
(b) FALSE
Q.56 The CRR refer to the share of _____ that banks have to maintain with RBI of their net
demand and time liabilities. [2 Marks]
(a) illiquid cash
(b) forex reserves
(c) gold
(d) liquid cash
Q.57 While outsourcing, the only consideration should be cost savings [2 Marks]
(a) TRUE
(b) FALSE
Q.59 Government securities issued by the Central Government are considered to be part of
SLR securities, but not securities issued by State Governments. [2 Marks]
(a) FALSE
(b) TRUE
Q.60 Under the Banking Regulation Act insurance is not included in the list of permissible
businesses. However, Ministry of Finance provides special permission to banks to
enter the insurance business. [2 Marks]
(a) FALSE
(b) TRUE
________________________________________
109
Correct Answers :
110 | https://ru.scribd.com/document/32997012/Bank-Management | CC-MAIN-2019-30 | en | refinedweb |
Provided by: libdrawtk-dev_1.0b-1_amd64
NAME
dtk_draw_shape - Draw a shape in the window
SYNOPSIS
#include <drawtk.h> void dtk_draw_shape(const dtk_hshape shp);
DESCRIPTION
dtk_draw_shape() draw the shape referenced by shp in the current window. The position of the drawing depends on the rotation and translation previously set by dtk_*move_shape() and dtk_*rotate_shape() (the rotation is first applied to the shape and then the translation). This function assumes there is a valid OpenGL rendering context in the calling thread. So a successfull call to dtk_make_current_window() should have been performed previously in the current thread.
RETURN VALUE
None.
SEE ALSO
dtk_move_shape(3), dtk_make_current_window(3) | http://manpages.ubuntu.com/manpages/precise/man3/dtk_draw_shape.3.html | CC-MAIN-2019-30 | en | refinedweb |
Company infoAbout us
Scaling a startup from side project to 20 million hits/month.
Handling millions of visitors on a $12 a month PythonAnywhere account using Django.
Scaling a popular internet radio station using PythonAnywhere and web2py.
HSK东西 Scripts: handling Chinese characters in Python 2.7 with a PythonAnywhere free account and Flask.
Scraping and number crunching for a sentiment analysis website with PythonAnywhere., 13 May 2019
Kudo's to you all for a great infrastructure. Last month a blog post hit the front page of hackernews and the site didn't flinch despite getting a [...] tonne of traffic (for me).
Duncan Murray,, 11 February 2019
Thank you for the wonderful platform! Our students really enjoyed working on the platform this semester.
Vasundhara, 19 December 2018
, 6 September 2018
PythonAnywhere is reliable, fast, easy as a piece of cake and that's what every developer's looking for.
Jakwan Hussain, 16 February 2018
The web interface is intuitive, and the site is amazing. I believe this is a must have for every Python fan.
Dániel Ernő Szabó, 2 February 2018
I love PythonAnywhere. The best support ever.
Mark Kelly, 2 February 2018, 2 March 2017
We found the site absolutely amazing. We were a team at HackIllinois 2017 and we tried to use $BIG_SCARY_COMPETITOR and $BIGGER_SCARIER_COMPETITOR to host our project but your platform was literally one single button. All we had to do was copy and paste our code and we were on the road!
Purdue IEEE EMBS, 27 February 2017
I'm a largely self-taught programmer, and PA has made the learning curve for developing a web app so much more accessible! Before I found PA I attempted to set up a VPS with [a competitor] but I found myself spending far too much time trying to learn how to manage the server and too little time actually developing anything. The set-up that PA has has allowed me to focus on my coding, to the extent that I have been able to successfully develop a fully-functional web app running the ticketing system for the [event] that I'm organising. I think you guys have a really really great product here, so I just want to say a big thank you for all the work that you've put into creating it!
mforcexvi, 3 January 2017 'Easy' difficulty and God Mode enabled.
George, 10 June 2016
I looked all over the internet for sites where I could easily deploy and honestly you guys are by far the best I saw and for free. I hope you guys know you have an awesome product.
Josh Cristol, UTCS, 25 April 2016
I'm not going to lie, your entire system is amazing. The in-browser console is particularly fun.... I've tried using $COMPETITOR and other full-fledged systems, but they all have far more complexity than I need. PythonAnywhere is perfect.
Douglas Franklin, 4 February 2016
I must commend you on the work you've done to make PythonAnywhere easy to use. Everything was a breeze and my app was running online in an hour!
Aditya Medhe, 11 January 2016
Just want to let you know you saved the day. I had at least two “rage quits” under my belt trying to deploy my site at other places, and then I found you! I was up in under an hour.
Cole Howard, 11 December 2015
I love your site, I have used it for several projects in the past. Python is an excellent language and the ability to deploy it in the Web using web2py is very nice. PythonAnywhere provides the fastest, easiest way of deploying web2py apps and I would recommend it to anyone in the market for such a service.
Riley Jones, 28 October 2015
I was under the gun to come up to speed with Python and MySQL. I was hearing horror stories about setting up Python/MySQL on Windows so I decided to use your site instead and I truly enjoyed the experience.
Trevor David, 10 September 2015
I came across PythonAnywhere for the first time & I was able to get up & running in few hours. I was able to accomplish all I want with in a day. Thanks a for developing this great platform.
Srinivas Annam, 3 August 2015
Just wanted to let you know that as a complete novice to web development PythonAnywhere has been absolutely fantastic at getting me off the ground. Until yesterday I had never deployed a site, ever. PythonAnywhere let me deploy my first one in just under 10 mins.
Rob Treharne, 11 June 2015
I was looking for a way to explore and learn a little bit about web2py and your platform is perfect for it. So far I’m finding it absolutely fantastic and effortless. Thanks for doing all the great job.
Emil Rozbicki, 10 April 2015
So I tried out your site, and I think it is absolutely fantastic! It is so easy to get up a bash or python console, import via git, install whatever libraries one can wish in the virtual environment. Perfect! I must admit that I've never tried any other web-host sites, but why would I need that when you offer everything I need? Keep up the good work!
An anonymous user, 9 April 2015
This was most probably the best button I clicked on the web.... It's really amazing. Almost the entire python stack is available (batteries I needed and batteries I never knew). But seriously, the infrastructure and ease to work with PythonAnywhere is amazing.
An anonymous user, 20 February 2015
I just deployed my app on pythonanywhere and I am finding it great. In minutes I had my app running on real server. Thanks to you guys, you have done a great job.
Harshit Jain, 13 January 2015
I've really enjoyed my time with PythonAnywhere, I learned so much and above everything else the customer service [is] truly astonishing. Harry, Glenn, and [Giles] do an amazing job and I will fully endorse PythonAnywhere when people approach me and want to start web development or want to host python projects.
Adam Wester, 19 November 2014
I have to say that this is a really great and unique hosting service, I don't know of any Python host that is even comparable. ($COMPETITOR maybe, but that is really restrictive compared to yours.)
Stefan Murawski, 3 November 2014
I find it really easy working with pythonanywhere, absolutely hassle free, well done there! ... by providing such service you inspire me to just be creative without having to worry much about deployment.
Jatinder Pal Singh, 6 October 2014
I really like pythonanywhere.com because I find it simple enough to deploy my django based projects in no time. I have also encouraged my peers to use pythonanywhere.com for deployment and testing. You guys are doing a great job! Kudos to pythonanywhere.com's developers and team.
User "isachin", 20 September 2014
PythonAnywhere is by far the most user-friendly hosting environment for django projects!
Alexis Kalamiotis, 28 August 2014
In addition to providing an incredibly reliable and easy-to-use service, PythonAnywhere also offers the best support out there. I emailed them with an issue I had, and Giles responded within a few minutes. My problem ended up not being related to PythonAnywhere at all, yet Giles was kind enough to help me nonetheless, and ended up solving my problem. PythonAnywhere has restored my faith in customer support and reliable service :) Thank you Giles and thanks PythonAnywhere!
An anonymous user, 20 June 2014
Your site kinda saved my life. I had a follow-up interview where I was supposed to write all this web-based code using Python and I couldn't get $COMPETITOR to play nice. With 48 hours left I stumbled across your site and managed to get the whole thing up and running. The interview went great! Python was INCREDIBLY easy to use and your tutorials were clear, crisp and got me up and running right away. I can't believe I'm getting so much for only $5 a month! P.S. I got the job!!!
An anonymous user, 6 April 2014.
User 'dmax999'
Love the concept of PythonAnywhere!!! First thing I tried was
import numpy as np and
import pandas as pd and when
that worked, I was pleasantly surprised and excited!
User 'johnv'
Oh wow, PythonAnywhere is working pretty good on my iPad! I really like the custom extra keys, very helpful. Now I can use Python literally anywhere. Keep up the awesome work. I am recommending your site to everyone I know.
Paul Eipper
I'm driving a plan at my workplace to migrate our Django web properties from Amazon to PythonAnywhere. The level of abstraction you guys provide is unparalleled!
Rudi MK
Nice work guys, PythonAnywhere is awesome.
Austin Godber
Wow! Guys! I don't know how to express this feeling. I am just mindblown at the moment! Thanks for this awesomeness! Go! Go! Go! Still mindblown! I probably don't have anything to say at the moment. The first thing I did was to do import numpy and scipy on IPython really hoping that I would not have it but was totally surprised not to see an error! Going forward I see this doing some kickass stuff for me.
User 'madhusudancs'
You offer a great and unique service. Thank you!
User 'blahsd'
I recently got a chromebook and have been getting into the idea of
using the browser for everything and not having have to worry about
the local file system. I love python.
Let me just say plainly that this service blew me away, really awesome. I was hoping for something like this, but I thought it didn't exist or would exist in the future. I appreciate all the work that must have gone into this. And the fact that there is a free version to try it out.
John Ianicello
PythonAnywhere is awesome! Thanks for making this.
User 'bjoern'
PythonAnywhere is amazing!
Ivan Grishaev
Great website, and the only one I've tried that works so simply.
Nick Faro | https://www.pythonanywhere.com/about/testimonials/ | CC-MAIN-2019-30 | en | refinedweb |
when
When doing a thing today, with the PS to do a few pictures, but change the suffix, so an error: Error Description: Began as a project at the eclipse on the error, but the folder is not actually find a mistake, but does output a lot of control with th
suffix, error error, failure, eclipse, png file, error message, mistake, workspace, cattle, png image, file extension, image formats, number changes, drawing toolsMay 26
1. Picture by getting the current time to rename, so that each time the page is displayed, the picture file name is not the same , Pictures from the cache and the browser will not be read public static String picPath=""; String picName="exa
lt, java util, public static void, new java, png file, images, string title, title string, public static string, file temp, temp path, false flagApril 28
J2ME add custom icons and icons on the two lines: MIDlet-Icon: ***. png MIDlet-1: GameName, ***. png, classmain ***. Png is the icon file name, such as the icon.png, / icon.png, / image / icon.p ng, etc.; which ¡ / ¡ That path, such as / icon.png ico
32x32 icon, png file, chinese name, sony ericsson, png image, nokia series, mobile application, mobile device, java mobile, image icon, image directory, 240x320, png files, directory icon, custom icons, nokia s60, samsung phone, 128x160, oftware, n90December 28
I believe we are all familiar with the article: "With the free GAE (Google App Engine) to build a strong Blog (micolog) site," We also believe that the understanding of the GAE, Also believe that many people optimistic about the GAE, even though
quot, email, google, png file, yaml, python, logo gif, cms, django, administrator account, gmail, mail, nonsense, upload photos, language support, gd, gae, article classification, website presentation, media imagesDecember 19
Whether writing technical tutorial, or show the desktop or application, I am afraid scrot (0.8) are essential tools ubuntu screenshot. scrot a screen capture tool liunx Jiaojiao leader, it is small and may well be powerful, concise and not the lack o
implementation, jp, png file, special circumstances, countdown, flexibility, hard disk, thumb, mouse selection, rectangular area, screenshot software, thumbnail, image file, image capture, essential tools, screen capture, option c, outer border, scrot, selection rectangleDecember 13
Just use AIR to finish the map editor, function relatively well be. Follow-up will also make greater adjustments. Here that, how to set the AIR desktop application icons. Create AIR application, the project will be generated automatically XXX-app.xml
lt, src directory, xml, png file, node, assets, desktop application, application icon, desktop window, map editor, design iconNovember 12
There is a problem in IE6, PNG file's opacoty attribute is not support.Someone give some solution that with IE filter or express.But this solution has a defects.It 's not support backgrond-position and background-repeat in CSS. DD_belatedPNG (http://
lt, script type, text javascript, attribute, js, png file, ie 6, dd, good solution, endifNovember 9
1: CCSpriteFrameCache 1): read from the *. plist file content settings CCSpriteFrameCache, the object is a global object / / Load all of the game's artwork up front. CCSpriteFrameCache * frameCache = [CCSpriteFrameCache sharedSpriteFrameCache]; [Fram
lt, source code, png file, bg, global object, velocity, atlas, incremental increase, sprite, texture, artwork, mobile 1, summary background, game artNovember 8
Css tools cited in paragraph 50: 50 CSS Tools section, including, CSS grid and layout tools, CSS optimization tools, CSS menu generation tool, CSS Button Generator, CSS rounded corners generator, CSS framewor
png file, page element, css sprites, content management system, performance mode, layout structure, optimization tools, css rounded corners, column layout, file css, grid layout, grid system, conventional tools, output columns, button generator, layout generators, grid builder, grid generator, grid tool, rapid designSeptember 21
Decoding and encoding written in Japanese to achieve Sourceforge.jp and swetake.com Here merge them into a jar file. Encoding test: import java.awt.Color; import java.awt.Graphics2D; import java.awt.image.BufferedImage; import java.io.File; import ja
import java, ioexception, sourceforge, type int, jar file, main string, png file, java awt, graphics2d, f system, code open source, setbackground, set backgroundSeptember 16
Microsoft recently released a cool new ASP.NET server controls, <asp:chart />, can be used in ASP.NET 3.5 in free, bring a rich browser-based scene graph: Download the free Microsoft chart control Download VS 2008 tool support for the chart control
pie chart, png file, data distribution, document visit, agenda item, tool support, pie pie, line charts, server control, scene graph, image server, control forum, free microsoft, control elements, rich graphics, microsoft chart control, asp chart, cumulative data, distribution data, doughnutAugust 17
NinePatchDrawable painting a retractable bitmap images, Android will be automatically resized to accommodate the display of the content. One example is the NinePatch background, using a standard Android button, the button must be scalable to accommod
suffix, png file, flexibility, background image, bottom line, character length, png image, tension, top of the line, apk, image view, static image, relative position, pixel border, image pixels, bitmap images, image regions, optional areaJuly 22
KJAVA popular here, not very busy, but it can not erase the love we have this group of people with the passion of J2ME development. Procedures on the screenshot, it is necessary to attach to your project, because the phone had never heard of double-j
lt, amp, public void, png file, passion, constructor, canvas, sleep, interpolation, initial value, love, png format, currenttimemillis, innovation, transformation, path image, screen shots, fps, image transfer, screenshot programJune 30
Introduction Plugins enhance the functionality of Openfire. This document is a developer's guide for creating plugins. Structure of a Plugin Plugins live in the plugins directory of openfireHome. When a plugin is deployed as a JAR or WAR file, it is
lt xml, utf 8, jsp, 32x32 icon, png file, optional database, xml file, guide introduction, optional user, file database, internationalization, database schema, web directory, meta data, readme file, plugin directory, developer guide, plugins directory, schema files, 16x16 iconJune 27
We all know that with a 20 U3D default terrain texture brush can basically meet the need for special needs, but if the terrain may be used when making a custom texture brush, after long testing and seeking information in the official forum was propos
png file, project directory, surprise, official forum, coincidence, special needs, fireworks, spelling, u3d, custom texture, terrain texture, importing graphics, terrain editor, data resource, gizmos, topographyJune 5 I
png file, browser windows, relevant sections, preview area, lattice, sdk tools, content area, image preview, png images, size ratio, drawing tools, left pane, border region, tool window, png files, purple section, preview view, content preview, box image, plot areaMay 31
Painting Style Box Draw 9-patch The Style Box drawing tool (draw9patch.bat) lets you easily through a WYSIWYG (WYS | WYG) editor to create a Jiugongge NinePatch map. About Jiugongge map and how it works, please read the Style Box Image Ninepatch Imag
png file, browser windows, relevant sections, image size, wysiwyg, lattice, sdk tools, content area, png images, size ratio, pixel border, left pane, tool window, png files, purple section, preview view, content preview, box image, plot area, pink palaceMay 31
Add a background of today want to change the subject position of the upper left background, not appear in the upper left corner of the screen but for a Did not find vapi later found: "Res / drawables / my_drawable.xml" <? Xml version = "
lt xml, utf 8, xmlns, png file, background image, gravity, image position, subject positionMay 7
Good evening, I am hoping for some help. I have a basic listbox with some list items. Is it possible to render a. Png file when a list item is mousedover or selected instead of just changing the background color? I have been playing around in blend b
lt, png file, microsoft, background color, zhou, msdn, image path, imagesource, msft, good evening, september 11April 28
I have been using Firefox, but when using IE to open pages found lots of pictures does not show, then read on for Firefox, under a number of normal, it seems my IE is broken. So he quickly google, then be in line to see a good summary of the article,
google, content type, png file, firefox, input box, numerical data, folders, perfect solution, database content, crux, dll file, image png, png images, registry problems, registry query, image filter, m4v, system32 folderApril 5
Decoding and encoding written in the Japanese to achieve Sourceforge.jp and swetake.com Here to merge them into a jar file. Encoding test: import java.awt.Color; import java.awt.Graphics2D; import java.awt.image.BufferedImage; import java.io.File; im
boolean, import java, ioexception, sourceforge, type int, jar file, main string, png file, gbk, open source code, java awt, realization, getbytes, graphics2d, f system, fillrect, setcolor, rgb, setbackground, set backgroundMarch 29
Openfire Plugin Developer's Guide Introduction Plug-ins to enhance functionality Openfire. This document is a guide to developers to create plug-ins. The structure of a plug-in Plug-ins Plug-ins openfireHome stored in the directory. When deploying a
lt xml, directory structure, language internationalization, 32x32 icon, png file, optional database, storage directory, plug ins, xml file, guide introduction, package web, file storage, optional user, integrated management, web resources, database schema, meta data, directory web, schema files, custom webOctober 31
configure eclipse with mysql on linuxnginx proxy root of contextis NOT AVALID DATE TIMEhzzgdjspx.comhttq: url.cn 7fm53nxw.666jxjycom s.ihttp: localhost:5560 em: 192.168.68.131:7890 easoa themes mskin login login.jsphttps: pmsweijie360.com | http://www.quweiji.com/tag/png-file/ | CC-MAIN-2019-30 | en | refinedweb |
FreeRTOS 10.0.1 With NXP S32 Design Studio 2018.R1
FreeRTOS 10.0.1 With NXP S32 Design Studio 2018.R1
Need help updating your FreeRTOS 10.0.1? Here's how to do it with the NXP S32 Design Studio 2018.R1.
Join the DZone community and get the full member experience.Join For Free
NXP not only sells a general purpose microcontroller, but a portfolio of automotive devices that that includes a component for FreeRTOS. But that component in S32DS 2018.R1 comes with an old V8.2.1 FreeRTOS component:
FreeRTOS 8.2.1 in S32DS 2018.R1
So, what do I do if want to use the latest FreeRTOS (currently 10.0.1) with all the bells and whistles?
This article describes how to upgrade it to the latest and greatest FreeRTOS V10.0.1:
FreeRTOS 10.0.1 in S32DS 2018.R1
Outline
The latest FreeRTOS V10.0.1 has many benefits: it is under a more permissible license, plus it comes with all the latest features like static memory allocation or direct task notification. Because it is not possible to directly update the FreeRTOS component in S32DS, I’m using the McuOnEclipse FreeRTOS component for S32DS. That component version is always up to the latest FreeRTOS version, supports multiple IDE’s (CodeWarrior classic, CodeWarrior for MCU 10.x, Kinetis Design Studio, MCUXpresso IDE and now as well S32DS) and a broad range of microcontroller (S08, S12, DSC, ColdFire, Kinetis and now as well S32DS). Additionally, it seamlessly integrates SEGGER SystemView/RTT and Percepio FreeRTOS Tracealyzer.
At the time of this article, not all McuOnEclipse components have been ported to S32DS. More components will be released in the future.
This article describes how to add and use the FreeRTOS 10.0.1 McuOnEclipse components in S32DS, followed by a tutorial how to create a FreeRTOS project for the S32K144EVB board.
Example projects like the one discussed here are available on GithHub:
Creating the Project
In a first step, we create a basic project we can debug on the board.
In S32DS, use the menu File > New > S32DS Application Project:
New Application Project
Provide a name for the project and select the device to be used:
Create a S32 DS Project
Press Next. Click on the browse button to select the SDK:
Choose SDK
Select the S32K144_SDK_gcc SDK:
S32K144 SDK
Press OK. Now, the SDK is selected:
Press Finish to create the project. In Eclipse, I have now the project created:
Basic Project
Now, it would be good time to build (menu Project > Build Project) and debug (Menu Run > Debug) the project to verify everything is working so far. Doing the Debug, it will ask me for which configuration I want to use. I select the Debug one:
Debug Configuration
Depending on the debug connection, I can set it to OpenSDA:
OpenSDA
With this, I should be able to debug the project:
Debugging the Initial Project
Congratulations! You can now terminate the debug session with the red ‘stop’ button and switch back to the C/C++ perspective.
Blinky LED
In a next step, we mux the RGB LED pins on the S32K144EVB board. For this, double-click on the PinSettings component to open the Component Inspector for it:
Pin Muxing Settings
The RGB LEDs are on PTD0, PTD15, and PTD16. Route them in the Inspector as shown below:
Routed Pins
Then, generate code with the button in the components view:
Generating Code
Next, add the following code into main():
CLOCK_SYS_Init(g_clockManConfigsArr, CLOCK_MANAGER_CONFIG_CNT, g_clockManCallbacksArr, CLOCK_MANAGER_CALLBACK_CNT); CLOCK_SYS_UpdateConfiguration(0U, CLOCK_MANAGER_POLICY_FORCIBLE); PINS_DRV_Init(NUM_OF_CONFIGURED_PINS, g_pin_mux_InitConfigArr); PINS_DRV_SetPinsDirection(PTD, (1<<0U) | (1<<15U) | (1<<16U)); /* set as output */ PINS_DRV_SetPins(PTD, (1<<0U) | (1<<15U) | (1<<16U)); /* all LEDs off */ PINS_DRV_ClearPins(PTD, (1<<15U)); /* RED pin low ==> ON */ PINS_DRV_TogglePins(PTD, (1<<15U)); /* RED pin high => off */
Added Code to Main
This is to initialize the clock and GPIO pin drivers, followed by turning all LEDs off and then the Red one on and off.
Build and debug it on the board to verify everything is working as expected.
McuOnEclipse Component Installation
You need at the 1-July-2018 release or later.
Download from SourceForge the latest zip file and unzip it.
Use the menu Processor Expert > Import Component(s):
Processor Expert Import Components
Select *both* *.PEupd files and press Open:
Open .PEupd Files.PEupd Files
Specify/Select the component repository where to import the components:
Component Repository
If that repository does not exist yet, add a new one:
I’m using below the McuOnEclipse folder inside the S32DS installation folder. Create that folder first if it does not exist yet.
Add Repository
Because the Processor Expert in S32DS does not include all needed Processor Expert include files, another manual step is required. These extra files are present inside the package you have downloaded from SourceForge:
Adding FreeRTOS
Next, add the FreeRTOS component from the McuOnEclipse repository to the project:
Adding FreeRTOS Component to Project
This will bring in a few other components into the project. Open the Inspector view for the McuLibConfig component:
Configure it to use the S32K SDK:
In the FreeRTOS settings, verify that the ARM core is matching your board:
This completes the settings. Generate code:
Initializing Component Drivers
In other IDE’s (Kinetis Design Studio, CodeWarrior, …), the Processor Expert will initialize the component drivers. Because this is not implemented in the S32DS version, I have to call the Init functions separately. For this, add the following template to main.c:
static void Components_Init(void) { #define CPU_INIT_MCUONECLIPSE_DRIVERS /* IMPORTANT: copy the content from Cpu.c! */ /*------------------------------------------------------------------*/ /* copy-paste code from Cpu.c below: */ /*------------------------------------------------------------------*/ }
You find the code to copy at the end of Generated_Code\Cpu.c:
Initialization Code in Cpu.c
Copy that code and place it inside Components_Init() in main.c:
Unfortunately, this is a manual process. Whenever you add/remove a component, make sure you update the Components_Init() function.
Events
The next manual thing is about Processor Expert events. In other IDE’s, the Processor Expert is creating proper event modules. In S32DS, it only adds the events to the Events.c, which is not complete.
To solve this, first exclude the file Events.c from the build. Use the properties and turn on ‘Exclude resource from build’ (see “Exclude Source Files from Build in Eclipse“).
Then, include the header file and source file into main.c:
#include "Events.h" #include "Events.c"
This uses the preprocessor to place the event code into main.c:
FreeRTOS Task
Add the following code for a FreeRTOS task to main.c which blinks the green LED:
static void AppTask(void *param) { (void)param; /* not used */ for(;;) { PINS_DRV_TogglePins(PTD, (1<<16U)); /* blink green LED */ vTaskDelay(pdMS_TO_TICKS(1000)); /* wait 1 second */ } /* for */ }
Next, create the task in main() and start the scheduler:
if (xTaskCreate(AppTask, "App", 500/sizeof(StackType_t), NULL, tskIDLE_PRIORITY+1, NULL) != pdPASS) { for(;;){} /* error! probably out of memory */ } vTaskStartScheduler();
Now, build and debug.
Debugging FreeRTOS Application in S32DS
And, enjoy the blinking green LED:
Summary
It is great to see that Processor Expert at least continues to exist in the automotive part of NPX with the S32 Design Studio. However, that Processor Expert has been reduced to the S32K SDK and automatic component initialization and event handling needs a manual setup. Other than that, the first components work great in S32DS. And, for everyone using S32DS, the McuOnEclipse components offer the latest FreeRTOS and extra features like tickless idle mode, Segger RTT, Segger SystemViewer and Percepio Tracealyzer.
Happy updating!
Published at DZone with permission of Erich Styger , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/freertos-1001-with-nxp-s32-design-studio-2018r1 | CC-MAIN-2019-30 | en | refinedweb |
This section describes HIDL data types. For implementation details, see HIDL C++ (for C++ implementations) or HIDL Java (for Java implementations).
Similarities to C++ include:
structsuse C++ syntax;
unionssupport C++ syntax by default. Both must be named; anonymous structs and unions are not supported.
- Typedefs are allowed in HIDL (as they are in C++).
- C++-style comments are allowed and are copied to the generated header file.
Similarities to Java include:
- For each file, HIDL defines a Java-style namespace that must begin with
android.hardware.. The generated C++ namespace is
::android::hardware::….
- All definitions of the file are contained within a Java-style
interfacewrapper.
- HIDL array declarations follow the Java style, not the C++ style. Example:
struct Point { int32_t x; int32_t y; }; Point[3] triangle; // sized array
Data representation
A
struct or
union composed of
Standard-Layout
(a subset of the requirement of plain-old-data types) has a consistent memory
layout in generated C++ code, enforced with explicit alignment attributes on
struct and
union members.
Primitive HIDL types, as well as
enum and
bitfield
types (which always derive from primitive types), map to standard C++ types
such as
std::uint32_t from
cstdint.
As Java does not support unsigned types, unsigned HIDL types are mapped to the corresponding signed Java type. Structs map to Java classes; arrays map to Java arrays; unions are not currently supported in Java. Strings are stored internally as UTF8. Since Java supports only UTF16 strings, string values sent to or from a Java implementation are translated, and may not be identical on re-translation as the character sets do not always map smoothly.
Data received over IPC in C++ is marked
const and is in
read-only memory that persists only for the duration of the function call. Data
received over IPC in Java has already been copied into Java objects, so it can
be retained without additional copying (and may be modified).
Annotations
Java-style annotations may be added to type declarations. Annotations are parsed by the Vendor Test Suite (VTS) backend of the HIDL compiler but none of such parsed annotations are actually understood by the HIDL compiler. Instead, parsed VTS annotations are handled by the VTS Compiler (VTSC).
Annotations use Java syntax:
@annotation or
@annotation(value) or
@annotation(id=value, id=value…)
where value may be either a constant expression, a string, or a list of values
inside
{}, just as in Java. Multiple annotations of the same name
can be attached to the same item.
Forward declarations
In HIDL, structs may not be forward-declared, making user-defined, self-referential data types impossible (e.g., you cannot describe a linked list or a tree in HIDL). Most existing (pre-Android 8.x) HALs have limited use of forward declarations, which can be removed by rearranging data structure declarations.
This restriction allows data structures to be copied by-value with a simple
deep-copy, rather than keeping track of pointer values that may occur multiple
times in a self-referential data structure. If the same data is passed twice,
such as with two method parameters or
vec<T>s that point to
the same data, two separate copies are made and delivered.
Nested declarations
HIDL supports nested declarations to as many levels as desired (with one exception noted below). For example:
interface IFoo { uint32_t[3][4][5][6] multidimArray; vec<vec<vec<int8_t>>> multidimVector; vec<bool[4]> arrayVec; struct foo { struct bar { uint32_t val; }; bar b; } struct baz { foo f; foo.bar fb; // HIDL uses dots to access nested type names } …
The exception is that interface types can only be embedded in
vec<T> and only one level deep (no
vec<vec<IFoo>>).
Raw pointer syntax
The HIDL language does not use * and does not support the full flexibility of C/C++ raw pointers. For details on how HIDL encapsulates pointers and arrays/vectors, see vec<T> template.
Interfaces
The
interface keyword has two usages.
- It opens the definition of an interface in a .hal file.
- It can be used as a special type in struct/union fields, method parameters, and returns. It is viewed as a general interface and synonym to
android.hidl.base@1.0::IBase.
For example,
IServiceManager has the following method:
get(string fqName, string name) generates (interface service);
The method promises to lookup some interface by name. It is also
identical to replace interface with
android.hidl.base@1.0::IBase.
Interfaces can be only passed in two ways: as top-level parameters, or as
members of a
vec<IMyInterface>. They cannot be members of
nested vecs, structs, arrays, or unions.
MQDescriptorSync & MQDescriptorUnsync
The
MQDescriptorSync and
MQDescriptorUnsync types
pass a synchronized or unsynchronized Fast Message Queue (FMQ) descriptors
across a HIDL interface. For details, see
HIDL C++ (FMQs are not
supported in Java).
memory type
The
memory type is used to represent unmapped shared memory in
HIDL. It is only supported in C++. A value of this type can be used on the
receiving end to initialize an
IMemory object, mapping the memory
and making it usable. For details, see
HIDL C++.
Warning: Structured data placed in shared
memory MUST be a type whose format will never change for the lifetime of the
interface version passing the
memory. Otherwise, HALs may suffer
fatal compatibility problems.
pointer type
The
pointer type is for HIDL internal use only.
bitfield<T> type template
bitfield<T> in which
T is a
user-defined enum suggests the value is a bitwise-OR of the
enum values defined in
T. In generated code,
bitfield<T> appears as the underlying type of T. For
example:
enum Flag : uint8_t { HAS_FOO = 1 << 0, HAS_BAR = 1 << 1, HAS_BAZ = 1 << 2 }; typedef bitfield<Flag> Flags; setFlags(Flags flags) generates (bool success);
The compiler handles the type Flags the same as
uint8_t.
Why not use
(u)int8_t/
(u)int16_t/
(u)int32_t/
(u)int64_t?
Using
bitfield provides additional HAL information to the reader,
who now knows that
setFlags takes a bitwise-OR value of Flag (i.e.
knows that calling
setFlags with 16 is invalid). Without
bitfield, this information is conveyed only via documentation. In
addition, VTS can actually check if the value of flags is a bitwise-OR of Flag.
handle primitive type
WARNING: Addresses of any kind (even physical device addresses) must never be part of a native handle. Passing this information between processes is dangerous and makes them susceptible to attack. Any values passed between processes must be validated before they are used to look up allocated memory within a process. Otherwise, bad handles may cause bad memory access or memory corruption.
HIDL semantics are copy-by-value, which implies that parameters are copied.
Any large pieces of data, or data that needs to be shared between processes
(such as a sync fence), are handled by passing around file descriptors pointing
to persistent objects:
ashmem for shared memory, actual files, or
anything else that can hide behind a file descriptor. The binder driver
duplicates the file descriptor into the other process.
native_handle_t
Android supports
native_handle_t, a general handle concept
defined in
libcutils.
typedef struct native_handle { int version; /* sizeof(native_handle_t) */ int numFds; /* number of file-descriptors at &data[0] */ int numInts; /* number of ints at &data[numFds] */ int data[0]; /* numFds + numInts ints */ } native_handle_t;
A native handle is a collection of ints and file descriptors that gets passed
around by value. A single file descriptor can be stored in a native handle with
no ints and a single file descriptor. Passing handles using native handles
encapsulated with the
handle primitive type ensures that native
handles are directly included in HIDL.
As a
native_handle_t has variable size, it cannot be included
directly in a struct. A handle field generates a pointer to a separately
allocated
native_handle_t.
In earlier versions of Android, native handles were created using the same
functions present in
libcutils.
In Android 8.0 and higher, these functions are now copied to the
android::hardware::hidl namespace or moved to the NDK. HIDL
autogenerated code serializes and deserializes these functions automatically,
without involvement from user-written code.
Handle and file descriptor ownership
When you call a HIDL interface method that passes (or returns) a
hidl_handle object (either top-level or part of a compound type),
the ownership of the file descriptors contained in it is as follows:
- The caller passing a
hidl_handleobject as an argument retains ownership of the file descriptors contained in the
native_handle_tit wraps; the caller must close these file descriptors when it is done with them.
- The process returning a
hidl_handleobject (by passing it into a
_cbfunction) retains ownership of the file descriptors contained in the
native_handle_twrapped by the object; the process must close these file descriptors when it is done with them.
- A transport that receives a
hidl_handlehas ownership of the file descriptors inside the
native_handle_twrapped by the object; the receiver can use these file descriptors as-is during the transaction callback, but must clone the native handle to use the file descriptors beyond the callback. The transport will automatically
close()the file descriptors when the transaction is done.
HIDL does not support handles in Java (as Java doesn't support handles at all).
Sized arrays
For sized arrays in HIDL structs, their elements can be of any type a struct can contain:
struct foo { uint32_t[3] x; // array is contained in foo };
Strings
Strings appear differently in C++ and Java, but the underlying transport storage type is a C++ structure. For details, see HIDL C++ Data Types or HIDL Java Data Types.
Note: Passing a string to or from Java through a HIDL interface (including Java to Java) will cause character set conversions that may not exactly preserve the original encoding.
vec<T> type template
The
vec<T> template represents a variable-sized buffer
containing instances of
T.
T can be one of the following:
- Primitive types (e.g. uint32_t)
- Strings
- User-defined enums
- User-defined structs
- Interfaces, or the
interfacekeyword (
vec<IFoo>,
vec<interface>is supported only as a top-level parameter)
- Handles
- bitfield<U>
- vec<U>, where U is in this list except interface (e.g.
vec<vec<IFoo>>is not supported)
- U[] (sized array of U), where U is in this list except interface
User-defined types
This section describes user-defined types.
Enum
HIDL doesn't support anonymous enums. Otherwise, enums in HIDL are similar to C++11:
enum name : type { enumerator , enumerator = constexpr , … }
A base enum is defined in terms of one of the integer types in HIDL. If no value is specified for the first enumerator of an enum based on an integer type, the value defaults to 0. If no value is specified for a later enumerator, the value defaults to the previous value plus one. For example:
// RED == 0 // BLUE == 4 (GREEN + 1) enum Color : uint32_t { RED, GREEN = 3, BLUE }
An enum can also inherit from a previously defined enum. If no value is
specified for the first enumerator of a child enum (in this case
FullSpectrumColor), it defaults to the value of the last
enumerator of the parent enum plus one. For example:
// ULTRAVIOLET == 5 (Color:BLUE + 1) enum FullSpectrumColor : Color { ULTRAVIOLET }
Warning: Enum inheritance works backwards from most other types of inheritance. A child enum value can't be used as a parent enum value. This is because a child enum includes more values than the parent. However, a parent enum value can be safely used as a child enum value because child enum values are by definition a superset of parent enum values. Keep this in mind when designing interfaces as this means types referring to parent enums can't refer to child enums in later iterations of your interface.
Values of enums are referred to with the colon syntax (not dot syntax as
nested types). The syntax is
Type:VALUE_NAME. No need to specify
type if the value is referenced in the same enum type or child types. Example:
enum Grayscale : uint32_t { BLACK = 0, WHITE = BLACK + 1 }; enum Color : Grayscale { RED = WHITE + 1 }; enum Unrelated : uint32_t { FOO = Color:RED + 1 };
Struct
HIDL does not support anonymous structs. Otherwise, structs in HIDL are very similar to C.
HIDL does not support variable-length data structures contained wholly within
a struct. This includes the indefinite-length array that is sometimes used as
the last field of a struct in C/C++ (sometimes seen with a size of
[0]). HIDL
vec<T> represents dynamically-sized
arrays with the data stored in a separate buffer; such instances are represented
with an instance of the
vec<T> in the
struct.
Similarly,
string can be contained in a
struct
(associated buffers are separate). In the generated C++, instances of the HIDL
handle type are represented via a pointer to the actual native handle as
instances of the underlying data type are variable-length.
Union
HIDL does not support anonymous unions. Otherwise, unions are similar to C.
Unions cannot contain fix-up types (pointers, file descriptors, binder
objects, etc.). They do not need special fields or associated types and are
simply copied via
memcpy() or equivalent. An union may not directly
contain (or contain via other data structures) anything that requires setting
binder offsets (i.e., handle or binder-interface references). For example:
union UnionType { uint32_t a; // vec<uint32_t> r; // Error: can't contain a vec<T> uint8_t b;1 }; fun8(UnionType info); // Legal
Unions can also be declared inside of structs. For example:
struct MyStruct { union MyUnion { uint32_t a; uint8_t b; }; // declares type but not member union MyUnion2 { uint32_t a; uint8_t b; } data; // declares type but not member } | https://source.android.com/devices/architecture/hidl/types | CC-MAIN-2019-30 | en | refinedweb |
This instructable describes an approach to read temperature and humidity data from a RuuviTag using Bluetooth with a Raspberry Pi Zero W and to display the values in binary numbers on a Pimoroni blinkt! pHAT.
Or to put it short: how to build a state of the art and a bit nerdy thermometer.
The RuuviTag is an open source sensor bluetooth beacon that comes with a temperature/humidity/pressure and accelation sensors, but may also act as a standard Eddystone™ / iBeacon proximity beacon. It was a very successfully Kickstarter project and I got mine a few weeks ago. There is a Github with python software to read the RuuviTag using a raspberry, and I have used one of their examples, with some additions.
The Raspberry Pi Zero W is the latest member of the RPi family, basically a Pi Zero with Bluetooth and WLAN added.
The blinkt! pHAT from Pimoroni is basically a strip of eight RBG LEDs configured as a HAT for the Raspberry Pi. It is very easy to use and comes with a python library.
The idea was to read the data from the RuuviTag and display it using the blinkt! HAT. The values are displayed as binary numbers using 7 of the LEDs, while the eight one is used to indicate if humidity or temperature (+/-/0) values are displayed.
Step 1: Setting Up the System
Setting up the system is easy:
- Switch on the RuuviTag (RuuviTag temperature sensor version) .
- Set up your RPi Zero W, RPi3, or any other RPi with bluetooth capacity added, following the instructions on.
- Place the blinkt! HAT on the RPi (while off).
- Install the blinkt! and RuuviTag software, as indicated on the corresponding GitHub pages.
- You now have to identify the MAC address of your RuuviTag
- copy the attached Python program, open it with IDLE for Python 3
- change the MAC address of the RuuviTag to yours, then save and run the program.
- feel free to modify and optimize the program.
The program comes as it is, to be used on your own risk, no liabilities are taken for any damages.
Step 2: The Device and the Program
As mentioned above, the idea was to construct a simple and inexpensive system to read data from the beacon and display numerical values on the blinkt! HAT, or a similar LED strip.
The range of values for temperature to be measured with a RPi based system will in most cases be somewhere between - 50°C and +80°C, for humidity between 0 and 100%. So a display that can give values from -100 to +100 will be sufficient for most applications. Decimal numbers smaller as 128 can be displayed as binary numbers with 7 bits (or LEDs). So the program takes the temperature and humidity values from the RuuviTag as "float" numbers and transforms them into binary numbers, which then are displayed on the blinkt!.
As a first step, the number is rounded, analysed if positive, negative or zero, and then transformed into a positive number using "abs". Then the decimal number is converted into a 7-digit binary number, basically a string of 0s and 1s, which gets analysed and displayed on the last 7 pixels of the blinkt!.
For temperature values the first pixel indicates if the value is positive (red), zero (magenta) or negative (blue). Displaying humidity values it is set to green. To simplify the discrimination between temperature and humidity values the binary pixels are set white for temperature and yellow for humidity. To enhance legibility of the binary numbers, "0" pixel are not turned completely off, but instead are set much weaker than in the "1" state. As blinkt! pixels are veery bright, you can set the general brightness changing the parameter "bright"
The program displays the values and parts of the process also on screen. In addition you will find several muted (#) print instructions. I left them in, as you may find them helpful to understand the process if unmuted.
The values might also be stored in a log file.
Step 3: Program Code
The code was a bit debugged and optimized. You may now find version 3 (20_03_2017).
' This program is intended to read the temperature, humidity and pressure values form a RuuviTag '
' and to display the temperature and humidity values as binary numbers on a Pimorini blinkt! HAT. ' '' ' It is based on the print_to_screen.py example from the ruuvitag library at github. ' ' Requires a Pi Zero W, Pi 3 or any other RPi equiped with bluetooth and all neccessary libraries installed.'
import time import os from datetime import datetime
from ruuvitag_sensor.ruuvi import RuuviTagSensor
from blinkt import set_clear_on_exit, set_pixel, clear, show
def temp_blinkt(bt): # this routine takes the temperature value and displays it as a binary number on blinkt!
clear ()
# color and intensity of "1"pixels : white r1 = 64 g1 = 64 b1 = 64
#color and intensity of "0" pixels : white r0 = 5 g0 = 5 b0 = 5
# Round and convert into integer r = round (bt)
# vz represents algebraic sign for indicator pixel if (r>0): vz = 1 # positive elif (r<0): vz= 2 # negative else: vz= 0 # zero # print (vz) i = abs(r) #print (i)
# transform to absolute, 7-digit binary number i1 = i + 128 # for i<127 -> results in a 8-digit binary number starting with 1 # print (i1)
b = "{0:b}".format(i1) # convert to binary # print (b)
b0 = str (b) # convert to string
b1 = b0[1:8] #truncate first bit print ("binary number: ", b1)
# Set pixels on blinkt!
# set binary number for h in range (0,7): f = (h+1) if (b1[h] == "1"): set_pixel (f, r1, g1, b1) # print ("bit ", h, " is 1, pixel ", f) else: set_pixel (f, r0, g0, b0) # print("nil")
# Set indicator pixel if (vz==1): set_pixel (0, 64, 0, 0) # red for positive values elif (vz==2): set_pixel (0, 0, 0, 64) # blue for negative values else: set_pixel (0, 64, 0, 64) # magenta if zero
show()
# end of temp_blinkt()
def hum_blinkt(bh): # this takes the humidity value and displays it as a binary number on blinkt!
clear()
# color and intensity of "1" pixels: yellow r1 = 64 g1 = 64 b1 = 0
#color and intensity of "0" pixels : r0 = 5 g0 = 5 b0 = 0
# Round and transform into integer r = round (bh)
# transform to absolute, 7-digit binary number i = abs(r) #print (i)
i1 = i + 128 # for i<127 -> gives a 8-digit binary number starting with 1 # print (i1)
b = "{0:b}".format(i1) # print (b)
b0 = str (b)
b1 = b0[1:8] #truncate first bit print ("binary number: ", b1)
# Set pixels on blinkt!
# set binary number to pixels for h in range (0,7): f = (h+1) if (b1[h] == "1"): set_pixel (f, r1, g1, b1) else: # mute to blank LEDs set_pixel (f, r0, g0, b0) # mute to blank LEDs
# Set indicator pixel set_pixel (0, 0, 64, 0) # green for humidity
show()
# end of hum_blinkt()
set_clear_on_exit()
# Reading data from the RuuviTag
mac = 'EC:6D:59:6D:01:1C' # Change to your own device's mac-address
print('Starting')
sensor = RuuviTagSensor(mac)
while True: data = sensor.update()
line_sen = str.format('Sensor - {0}', mac) line_tem = str.format('Temperature: {0} C', data['temperature']) line_hum = str.format('Humidity: {0} %', data['humidity']) line_pre = str.format('Pressure: {0}', data['pressure'])
print() # display temperature on blinkt! ba = str.format('{0}',data['temperature']) bt = float (ba) print (bt, " °C") temp_blinkt (bt) print()
time.sleep (10) # display temperature for 10 seconds
# display humidity on blinkt! bg = str.format('{0}',data['humidity']) bh = float (bg) print (bh, " %") hum_blinkt (bh) print ()
# Clear screen and print sensor data to screen os.system('clear') print('Press Ctrl+C to quit.\n\n') print(str(datetime.now())) print(line_sen) print(line_tem) print(line_hum) print(line_pre) print('\n\n\r.......')
# Wait for a few seconds and start over again try: time.sleep(8) except KeyboardInterrupt: # When Ctrl+C is pressed execution of the while loop is stopped print('Exit') clear() show () break
Participated in the
Sensors Contest 2017
Participated in the
Microcontroller Contest 2017
Discussions | https://www.instructables.com/id/RuuviTag-and-PiZero-W-And-Blinkt/ | CC-MAIN-2019-30 | en | refinedweb |
Importing Missing Namespaces
When you use types whose namespaces have not been imported in the file, ReSharper helps you locate these types and add the missing namespace import directives. If there are several missing missing namespaces for unresolved types, e.g. after you paste a block of code in the file, ReSharper would import all these namespaces in a single action.
ReSharper looks for non-imported alphabetical order: all
System.* namespaces go first, sorted alphabetically by the second word after dot; all the rest namespaces go next, in alphabetical order.
When you edit a code file, types with missing namespaces are detected with the design-time code inspection (so make sure that it is enabled), ReSharper lets you choose the namespace to import:
If for some reason you chose not to import a required namespace when the pop-up window was displayed, just press Esc to hide the pop-up./technologies:
The instructions and examples given here address the use of the feature in C#. For details specific to other languages, see corresponding topics in the ReSharper by Language section. | https://www.jetbrains.com/help/resharper/2017.1/Coding_Assistance__Importing_Namespaces.html | CC-MAIN-2019-30 | en | refinedweb |
Useful tips for testing redux in react with jest and enzyme.January 23, 2020 - 9 min read
Hi guys in this post I would like to share some useful tips I have found when testing. Having the opportunity of being working in a real project with react has taught me a thing or two. Patterns I found quite useful, I also managed to create a way to test redux as well, and how to separate concerns, when testing
react-redux.
This examples are using jest as the test suite and enzyme as the testing utility.
Testing wrapped components.
First let start with the simplest, when you’re using react with other libraries, you may have come across with wrapper functions. A wrapper function is a
HOC that as it name suggest it wraps your component to provide extra functionality.
react-redux has the
connect and react router has the
withRouter function. If your project leverages the use of any of those libraries you have probably used them. Testing those functions is very easy because what they do is provide additional props to your existing component.
When I was starting writing tests for a connected Redux component, I remember seeing this failure whenever I tried to write tests for connected components:
Invariant Violation: Could not find “store” in the context of “Connect(ComponentName)“. Either wrap the root component in a
or pass a custom React context provider to and the corresponding React context consumer to Connect(ComponentName) in connect options.
This is because our test suite unlike our application, is not wrapped in a
<Provider /> component, so it is not aware of the store context. To solve it without using a third party library. we can do the following; Take this component as an example:
import React from "react"; import { connect } from "react-redux"; export const Counter = ({ counter }) => { return ( <p> {counter} </p> ) } const mapStateToProps = state => ({ counter: state.counterReducer.counter }); export default connect(mapStateToProps)(Counter);
This is a really simple component that is connected to the redux store, in order to use a counter value. To be able to test it we need to create a named export of the component and test it instead of testing the default one that is wrapped with connect. Our test would look something like this:
import React from "react"; import { shallow } from "enzyme"; // Notice the non default export here import { Counter } from "./Counter"; let component; const mockProps = { counter: 0}; describe("Counter Component", () => { beforeAll(() => { component = shallow(<Counter {...mockProps} />); }); it("displays the counter value", () => { expect(component.find("p").text()).toBe("0"); }); });
What the connect function does, is that pass the store state to the component as props, in order to test the component we just need to mock the store state, and inject it as we do with regular props.
Same with dispatching actions, they are just part of the props, so in this example if we want to dispatch a certain action we have to do something like this:
// Rest of the imports import { bindActionCreators } from "redux"; import { incrementAction, decrementAction } from "redux-modules/counter/counter"; export const Counter = (props) => { const { counter, increment, decrement } = props; return ( <div> <p>{counter}</p> <button id="increment" type="button" onClick={() => increment()}> Increment </button> <button id="decrement" type="button" onClick={() => decrement()}> Decrement </button> </div> ); }; const mapDispatchToProps = dispatch => { return bindActionCreators( { increment: incrementAction, decrement: decrementAction }, dispatch );}; // Rest of the code export default connect( mapStateToProps, mapDispatchToProps )(Counter);
For those who don’t know
bindActionCreators is an utility that let us dispatch the action creator by just calling the function, without having to use the dispatch function. Is just a personal preference I like to use, so in the tests I can mock the increment function like this.
import React from "react"; import { shallow } from "enzyme"; // Notice the non default export here import { Counter } from "./Counter"; let component; const mockProps = { counter: 1, increment: jest.fn(() => 1), decrement: jest.fn(() => -1) }; describe("Counter Component", () => { beforeAll(() => { component = shallow(<Counter {...mockProps} />); }); it("displays the counter value", () => { expect(component.find("p").text()).toBe("0"); }); it("triggers the increment function", () => { component.find("#increment").simulate("click"); expect(mockProps.increment.mock.results[0].value).toBe(1); });});
If you see the highlights I’m mocking the function increment using
jest.fn(() => 1) and it should return
1, since the component is calling that function on an
onClick event of a button, I’m searching the right button by using its id and I’m simulating the click event; If a click happens on the real component, the increment function will be triggered and the action will be dispatched, in this case if a clicks happens I should be seeing my mock increment function being triggered as well, but it should return
1 instead of dispatching because that’s what I wanted to return in the test.
As you can see, here we test that a function is being called, we don’t test what the function does. You don’t need to test that the counter increments, because that is not a responsibility of the component, it’s a responsibility from the redux action.
Note: If you're using other libraries that use wrappers like withRouter from react router, you could do the named import and create an export that is not using a wrapper.
Testing the reducer:
To test the reducer I use a similar approach as the one that the redux docs use, what you’re doing is to test the reducer function, this function receives an state(which is the object containing the actual state) and an action(which is also an object) that it always has a type and sometimes it could have a payload.
Take this reducer from the same counter example.
const initialState = { counter: 0 }; // Reducer export default function reducer(state = initialState, action = {}) { switch (action.type) { case "INCREMENT": return { ...state, counter: state.counter + 1, }; case "DECREMENT": return { ...state, counter: state.counter - 1, }; default: return state; }}
This reducer is the one used to increment or decrement an initial counter set to
0. To test it we are going to prove that the cases asserts the expected return values, for example if the reducer receives an action with type
INCREMENT, it should increase the counter of the current state by
1. so we do a test like this one:
const initialState = { counter: 0 }; describe("reducers", () => { describe("counter", () => { let updatedState = {}; it("handles INCREMENT action", () => { updatedState = { counter: 1 }; expect( counterReducer( { ...initialState }, { type: "INCREMENT" } ) ).toEqual(updatedState); }); });});
PD: If you are wondering what the heck are
incrementAction and
decrementAction in the
Counter.js file above , it is just this:
export function incrementAction() { return { type: INCREMENT }; }
A function that returns an action. Is useful to avoid having to write the entire action object everytime you want to dispatch.
As you can see we just use the reducer function and pass the arguments that it needs, to return a new state. We can pass a modified state like
{ counter: 3 } and the action with type
DECREMENTand guess what, the
updatedState should be
{ counter: 2 }. With payloads on the action it is pretty similar, you just have to keep in mind that when you are sending a payload, you normally want to use that to perform additional computations or validations. so the
updatedState is going be updated based on that payload.
I like to separate the redux boilerplate from the react testing because I think this approach is a good way to ensure that everything works, separating concerns is the way to go, since you don’t need to test redux functionality in a component.
Testing selectors
Selectors are function that takes the state coming from redux and performs computations from them to return a new value. Imagine I have an state that has an array of user objects like this
{ name: "John", age 35 }, the array does not have an specific order, but is a requirement to show the list of users ordered by age. Selectors are useful to do that before the data is painted in the screen so if you have a selector like this one
const initialState = { users: [ { name: "Bob", age: 27 }, { name: "Anne", age: 18 }, { name: "Paul", age: 15 }, { name: "Pam", age: 30 }, ] }; export default function reducer(state = initialState, action = {}) { switch (action.type) { default: return state; } } // Selectors export const usersByAgeSelector = state => { return state.userReducer.users.sort((a, b) => a.age - b.age);}
Our test should be like this one:
describe("selectors", () => { const state = { userReducer: { users: [ // Unordered List ], } }; const orderedUsers = [ { name: "Paul", age: 15 }, { name: "Anne", age: 18 }, { name: "Bob", age: 27 }, { name: "Pam", age: 30 }, ]; describe("#usersByAgeSelector", () => { it("sorts the users based on the age attribute", () => { expect(usersByAgeSelector(state)).toEqual(orderedUsers); }); }); });
Same as the reducer, we’re just testing a function that sorts a given array of objects based on their attributes, this is pure unit testing. Only thing you have to notice, is that you have to pass a state atructure, so keep that in consideration, your test will fail if your root reducer structure is not the same as the one you’re passing in the selector.
That would be all for it, I’m missing side effects, but I think that should be for another post(I’m familiar testing
redux-saga), but I hope you like this post, if you find this helpful, or you think you it can be improved, please let me know.
Built and mantained by Jean Aguilar
Nobody likes you when you're twenty three | https://loserkid.io/tips-for-testing-react-components/ | CC-MAIN-2022-33 | en | refinedweb |
Have.
Large projects can contain thousands of lines of code, distributed in multiple source files, written by many developers and arranged in several subdirectories. A project may contain several component divisions. These components may have complex inter-dependencies — for example, in order to compile component X, you have to first compile Y; in order to compile Y, you have to first compile Z; and so on. For a large project, when a few changes are made to the source, manually recompiling the entire project each time is tedious, error-prone and time-consuming.
Make is a solution to these problems. It can be used to specify dependencies between components, so that it will compile components in the order required to satisfy dependencies. An important feature is that when a project is recompiled after a few changes, it will recompile only the files which are changed, and any components that are dependent on it. This saves a lot of time. Make is, therefore, an essential tool for a large software project.
Each project needs a Makefile — a script that describes the project structure, namely, the source code files, the dependencies between them, compiler arguments, and how to produce the target output (normally, one or more executables). Whenever the make command is executed, the Makefile in the current working directory is interpreted, and the instructions executed to produce the target outputs. The Makefile contains a collection of rules, macros, variable assignments, etc. (‘Makefile’ or ‘makefile’ are both acceptable.)
Installing GNU Make
Most distributions don’t ship make as part of the default installation. You have to install it, either using the package-management system, or by manually compiling from source. To compile and build from source, download the tarball, extract it, and go through the README file. (If you’re running Ubuntu, you can install make as well as some other common packages required for building from source, by running:
sudo apt-get install build-essential.)
A sample project
To acquaint ourselves with the basics of make, let’s use a simple C “Hello world” project, and a Makefile that handles building of the target binary. We have three files (below):
module.h, the header file that contains the declarations;
module.c, which contains the definition of the function defined in
module.h; and the main file,
main.c, in which we call the
sample_func() defined in
module.c. Since
module.h includes the required header files like
stdio.h, we don’t need to include
stdio.h in every module; instead, we just include
module.h. Here,
module.c and
main.c can be compiled as separate object modules, and can be linked by GCC to obtain the target binary.
module.h:
#include <stdio.h> void sample_func();
module.c:
#include "module.h" void sample_func() { printf("Hello world!"); }
main.c:
#include "module.h" void sample_func(); int main() { sample_func(); return 0; }
The following are the manual steps to compile the project and produce the target binary:
slynux@freedom:~$ gcc -I . -c main.c # Obtain main.o slynux@freedom:~$ gcc -I . -c module.c # Obtain module.o slynux@freedom:~$ gcc main.o module.o -o target_bin #Obtain target binary
(
-I is used to include the current directory (
.) as a header file location.)
Writing a Makefile from scratch
By convention, all variable names used in a Makefile are in upper-case. A common variable assignment in a Makefile is
CC = gcc, which can then be used later on as
${CC} or
$(CC). Makefiles use
# as the comment-start marker, just like in shell scripts.
The general syntax of a Makefile rule is as follows:
target: dependency1 dependency2 ... [TAB] action1 [TAB] action2 ...
Let’s take a look at a simple Makefile for our sample project:
all: main.o module.o gcc main.o module.o -o target_bin main.o: main.c module.h gcc -I . -c main.c module.o: module.c module.h gcc -I . -c module.c clean: rm -rf *.o rm target_bin
We have four targets in the Makefile:
allis a special target that depends on
main.oand
module.o, and has the command (from the “manual” steps earlier) to make GCC link the two object files into the final executable binary.
main.ois a filename target that depends on
main.cand
module.h, and has the command to compile
main.cto produce
main.o.
module.ois a filename target that depends on
module.cand
module.h; it calls GCC to compile the
module.cfile to produce
module.o.
cleanis a special target that has no dependencies, but specifies the commands to clean the compilation outputs from the project directories.
You may be wondering why the order of the make targets and commands in the Makefile are not the same as that of the manual compilation commands we ran earlier. The reason is so that the easiest invocation, by just calling the make command, will result in the most commonly desired output — the final executable. How does this work?
The make command accepts a target parameter (one of those defined in the Makefile), so the generic command line syntax is
make <target>. However, make also works if you do not specify any target on the command line, saving you a little typing; in such a case, it defaults to the first target defined in the Makefile. In our Makefile, that is the target
all, which results in the creation of the desired executable binary
target_bin!
Makefile processing, in general exist. And (for filename targets — explained below) if they exist, whether they are newer than the target itself, by comparing file timestamps.
Before executing the action (commands) corresponding to the desired target, its dependencies must be met; when they are not met, the targets corresponding to the unmet dependencies are executed before the given make target, to supply the missing dependencies.
When a target is a filename, make compares the timestamps of the target file and its dependency files. If the dependency filename is another target in the Makefile, make then checks the timestamps of that target’s dependencies. It thus winds up recursively checking all the way down the dependency tree, to the source code files, to see if any of the files in the dependency tree are newer than their target filenames. (Of course, if the dependency files don’t exist, then make knows it must start executing the make targets from the “lowest” point in the dependency tree, to create them.)
If make finds that files in the dependency tree are newer than their target, then all the targets in the affected branch of the tree are executed, starting from the “lowest”, to update the dependency files. When make finally returns from its recursive checking of the tree, it completes the final comparison for the desired make target. If the dependency files are newer than the target (which is usually the case), it runs the command(s) for the desired make target.
This process is how make saves time, by executing only commands that need to be executed, based on which of the source files (listed as dependencies) have been updated, and have a newer timestamp than their target.
Now, when a target is not a filename (like all and clean in our Makefile, which we called “special targets”), make obviously cannot compare timestamps to check whether the target’s dependencies are newer. Therefore, such a target is always executed, if specified (or implied) on the command line.
For the execution of each target, make prints the actions while executing them. Note that each of the actions (shell commands written on a line) are executed in a separate sub-shell. If an action changes the shell environment, such a change is restricted to the sub-shell for that action line only. For example, if one action line contains a command like
cd newdir, the current directory will be changed only for that line/action; for the next line/action, the current directory will be unchanged.
Processing our Makefile
After understanding how make processes Makefiles, let’s run make on our own Makefile, and see how it is processed to illustrate how it works. In the project directory, we run the following command:
slynux@freedom:~$ make gcc -I . -c main.c gcc -I . -c module.c gcc main.o module.o -o target_bin
What has happened here?
When we ran make without specifying a target on the command line, it defaulted to the first target in our Makefile — that is, the target
all. This target’s dependencies are
module.o and
main.o. Since these files do not exist on our first run of make for this project, make notes that it must execute the targets
main.o and
module.o. These targets, in turn, produce the
main.o and
module.o files by executing the corresponding actions/commands. Finally, make executes the command for the target
all. Thus, we obtain our desired output,
target_bin.
If we immediately run make again, without changing any of the source files, we will see that only the command for the target
all is executed:
slynux@freedom:~$ make gcc main.o module.o -o target_bin
Though make checked the dependency tree, neither of the dependency targets (
module.o and
main.o) had their own dependency files bearing a later timestamp than the dependency target filename. Therefore, make rightly did not execute the commands for the dependency targets. As we mentioned earlier, since the target
all is not a filename, make cannot compare file timestamps, and thus executes the action/command for this target.
Now, we update
module.c by adding a statement
printf("\nfirst update"); inside the
sample_func() function. We then run make again:
slynux@freedom:~$ make gcc -I . -c module.c gcc main.o module.o -o target_bin
Since
module.c in the dependency tree has changed (it now has a later timestamp than its target,
module.o), make runs the action for the
module.o target, which recompiles the changed source file. It then runs the action for the all target.
We can explicitly invoke the
clean target to clean up all the generated
.o files and
target_bin:
$ make clean rm -rf *.o rm target_bin
More bytes on Makefiles
Make provides many interesting features that we can use in Makefiles. Let’s look at the most essential ones.
Dealing with assignments
There are different ways of assigning variables in a Makefile. They are (type of assignment, followed by the operator in parentheses):
Simple assignment (:=)
We can assign values (RHS) to variables (LHS) with this operator, for example:
CC := gcc. With simple assignment (
:=), the value is expanded and stored to all occurrences in the Makefile when its first definition is found.
For example, when a
CC := ${GCC} ${FLAGS} simple definition is first encountered,
CC is set to
gcc -W and wherever
${CC} occurs in actions, it is replaced with
gcc -W.
Recursive assignment (=)
Recursive assignment (the operator used is
=) involves variables and values that are not evaluated immediately on encountering their definition, but are re-evaluated every time they are encountered in an action that is being executed. As an example, say we have:
GCC = gcc FLAGS = -W
With the above lines,
CC = ${GCC} {FLAGS} will be converted to
gcc -W only when an action like
${CC} file.c is executed somewhere in the Makefile. With recursive assignation, if the GCC variable is changed later (for example,
GCC = c++), then when it is next encountered in an action line that is being updated, it will be re-evaluated, and the new value will be used;
${CC} will now expand to
c++ -W.
We will also have an interesting and useful application further in the article, where this feature is used to deal with varying cases of filename extensions of image files.
Conditional assignment (?=)
Conditional assignment statements assign the given value to the variable only if the variable does not yet have a value.
Appending (+=)
The appending operation appends texts to an existing variable. For example:
CC = gcc CC += -W
CC now holds the value
gcc -W.
Though variable assignments can occur in any part of the Makefile, on a new line, most variable declarations are found at the beginning of the Makefile.
Using patterns and special variables
The
% character can be used for wildcard pattern-matching, to provide generic targets. For example:
%.o: %.c [TAB] actions
When
% appears in the dependency list, it is replaced with the same string that was used to perform substitution in the target.
Inside actions, we can use special variables for matching filenames. Some of them are:
$@(full target name of the current target)
$?(returns the dependencies that are newer than the current target)
$*(returns the text that corresponds to
%in the target)
$<(name of the first dependency)
$^(name of all the dependencies with space as the delimiter)
Instead of writing each of the file names in the actions and the target, we can use shorthand notations based on the above, to write more generic Makefiles.
Action modifiers
We can change the behaviour of the actions we use by prefixing certain action modifiers to the actions. Two important action modifiers are:
-(minus) — Prefixing this to any action causes any error that occurs while executing the action to be ignored. By default, execution of a Makefile stops when any command returns a non-zero (error) value. If an error occurs, a message is printed, with the status code of the command, and noting that the error has been ignored. Looking at the Makefile from our sample project: in the clean target, the
rm target_bincommand will produce an error if that file does not exist (this could happen if the project had never been compiled, or if
make cleanis run twice consecutively). To handle this, we can prefix the
rmcommand with a minus, to ignore errors:
-rm target_bin.
@(at) suppresses the standard print-action-to-standard-output behaviour of make, for the action/command that is prefixed with
@. For example, to echo a custom message to standard output, we want only the output of the echo command, and don’t want to print the echo command line itself.
@echo Messagewill print “Message” without the echo command line being printed.
Use PHONY to avoid file-target name conflicts
Remember the
all and
clean special targets in our Makefile? What happens when the project directory has files with the names
all or
clean? The conflicts will cause errors. Use the
.PHONY directive to specify which targets are not to be treated as files — for example:
.PHONY: all clean.
Simulating make without actual execution
At times, maybe when developing the Makefile, we may want to trace the make execution (and view the logged messages) without actually running the actions, which is time consuming. Simply use
make -n to do a “dry run”.
Using the shell command output in a variable
Sometimes we need to use the output from one command/action in other places in the Makefile — for example, checking versions/locations of installed libraries, or other files required for compilation. We can obtain the shell output using the shell command. For example, to return a list of files in the current directory into a variable, we would run:
LS_OUT = $(shell ls).
Nested Makefiles
Nested Makefiles (which are Makefiles in one or more subdirectories that are also executed by running the make command in the parent directory) can be useful for building smaller projects as part of a larger project. To do this, we set up a target whose action changes directory to the subdirectory, and invokes make again:
subtargets: cd subdirectory && $(MAKE)
Instead of running the make command, we used
$(MAKE), an environment variable, to provide flexibility to include arguments. For example, if you were doing a “dry run” invocation: if we used the make command directly for the subdirectory, the simulation option (
-n) would not be passed, and the commands in the subdirectory’s Makefile would actually be executed. To enable use of the
-n argument, use the
$(MAKE) variable.
Now let’s improve our original Makefile using these advanced features:
CC = gcc # Compiler to use OPTIONS = -O2 -g -Wall # -g for debug, -O2 for optimise and -Wall additional messages INCLUDES = -I . # Directory for header file OBJS = main.o module.o # List of objects to be build .PHONY: all clean # To declare all, clean are not files all: ${OBJS} @echo "Building.." # To print "Building.." message ${CC} ${OPTIONS} ${INCLUDES} ${OBJS} -o target_bin %.o: %.c # % pattern wildcard matching ${CC} ${OPTIONS} -c $*.c ${INCLUDES} list: @echo $(shell ls) # To print output of command 'ls' clean: @echo "Cleaning up.." -rm -rf *.o # - prefix for ignoring errors and continue execution -rm target_bin
Run make on the modified Makefile and test it; also run make with the new list target. Observe the output.
Make in non-compilation contexts
I hope you’re now well informed about using make in a programming context. However, it’s also useful in non-programming contexts, due to the basic behaviour of checking the modification timestamps of target files and dependencies, and running the specified actions when required. For example, let’s write a Makefile that will manage an image store for us, doing thumbnailing when required. Our scenario is as follows:
- We have a directory with two subdirectories,
imagesand
thumb.
- The
imagessubdirectory contains many large image files;
thumbcontains thumbnails of the images, as
.jpgfiles, 100x100px in image size.
- When a new image is added to the images directory, creation of its thumbnail in the
thumbdirectory should be automated. If an image is modified, its thumbnail should be updated.
- The thumbnailing process should only be done for new or updated images, and not images that have up-to-date thumbnails.
This problem can be solved easily by creating a Makefile in the top-level directory, as follows:
FILES = $(shell find images -type f -iname "*.jpg" | sed 's/images/thumb/g') CONVERT_CMD = convert -resize "100x100" $< $@ MSG = @echo "\nUpdating thumbnail" $@ all: ${FILES} thumb/%.jpg: images/%.jpg $(MSG) $(CONVERT_CMD) thumb/%.JPG: images/%.JPG $(MSG) $(CONVERT_CMD) clean: @echo Cleaning up files.. rm -rf thumb/*.jpg thumb/*.JPG
In the above Makefile,
FILES = $(shell find images -type f -iname "*.jpg" | sed 's/images/thumb/g') is used to generate a list of dependency filenames. JPEG files could have the extension
.jpg or
.JPG (that is, differing in case). The
-iname parameter to find (
find images -type f -iname "*.jpg") will do a case-insensitive search on the names of files, and will return files with both lower-case and upper-case extensions — for example,
images/1.jpg,
images/2.jpg,
images/3.JPG and so on. The
sed command replaces the text “images” with “thumb”, to get the dependency file path.
When make is invoked, the all target is executed first. Since FILES contains a list of thumbnail files for which to check the timestamp (or if they exist), make jumps down to the
thumb/%.jpg wildcard target for each thumbnail image file name. (If the extension is upper-case, that is,
thumb/3.JPG, then make will look for, and find, the second wildcard target,
thumb/%.JPG.)
For each thumbnail file in the
thumb directory, its dependency is the image file in the
images directory. Hence, if any file (that’s expected to be) in the
thumb directory does not exist, or its timestamp is older than the dependency file in the
images directory, the action (calling
$(CONVERT_CMD) to create a thumbnail) is run.
Using the features we described earlier,
CONVERT_CMD is defined before targets are specified, but it uses recursive assignment. Hence, the input and target filenames passed to the convert command are substituted from the first dependency (
$<) and the target (
$@) every time the action is invoked, and thus will work no matter from which action target (
thumb/%.JPG or
thumb/%.jpg) the action is invoked.
Naturally, the “Updating thumbnail” message is also defined using recursive assignment for the same reasons, ensuring that
$(MSG) is re-evaluated every time the actions are executed, and thereby able to cope with variations in the case of the filename extension.
slynux@freedom:~$ make Updating thumbnail 1.jpg convert -resize "100x100" images/1.jpg thumb/1.jpg Updating thumbnail 4.jpg convert -resize "100x100" images/4.jpg thumb/4.jpg
If I edit
4.jpg in images and rerun make, since only
4.jpg‘s timestamp has changed, a thumbnail is generated for that image:
slynux@freedom:~$ make Updating thumbnail 4.jpg convert -resize "100x100" images/4.jpg thumb/4.jpg
Writing a script (shell script or Python, etc) to maintain image thumbnails by monitoring timestamps would have taken many lines of code. With make, we can do this in just 8 lines of Makefile. Isn’t make awesome?
That’s all about the basics of using the make utility. Happy hacking till we meet again!
This article was originally published in September 2010 issue of the print magazine.
sdd
Excellent.
Nice article, I have added on my blog:
That was a fantastic article, well done!
Nice. You know what I’d like to see is an article describing typical linux development. I’ve read some books and lots of web articles about programming on linux, and the problem with them, I feel, is that they don’t really tell you how things are usually done, they just explain how to use specific tools. And don’t get me wrong, that information is really useful, but like, where are files usually deployed, and how are development environments usually setup, and what tools are typically used for what kinds of projects. But maybe that stuff doesn’t matter as much when working with linux.
For arguments against nested Makefiles see “Recursive make considered harmful” by Peter Miller:
Really nice. Thank you very much for you article; it is truly helpful.
Ecelent article.thanks a lot.
very good article
Nice and clean tutorial!!
you make the best article
nice . ..
Excellent…. Very nice article
useful
The improved version of the Makefile won’t rebuild properly if modules.h change, as no target has dependency on it.
Modifying the wildcard rule will partially fix the error, will main.o be builded if module.h changes?:
%o: %.c %.h
An elegant but more complicated solutions is to create a list of dependecies using de -MM option in gcc. (see for an example)
Thanks a lot!
Thanks a lot
Wonderful article on make utility explaining basics in clear fashion.
best explanation I’ve found about make utility. Thanks a lot. Bookmarked, of course.
Instead of using traditional GNU make, which is what is usually used on Linux systems, consider checking out makepp. It supports most GNU make functionality, but expands it and replaces some of the bad parts of make (e.g. recursive make). It’ll automatically detect dependencies by following include directives, so you never have to worry about writing dependencies into your makefiles, or generating them with the C/C++ compiler, again. It does lots more, including making just in time compiling easy to use, which means common code that you used to put into static libraries (.a archive files) no longer needs to be placed in a library. This is because makepp is smart enough to grab it from whatever directory it’s in, compile it if it’s out of date, and then link it to whatever executable you’re building. This has many advantages that I’m not going to go into here. Just check out makepp at sourceforge. If you’re like me, you’ll never use regular make again. And this article is still a good intro to makepp!
Is there any way to detect how many times a target is called | https://www.opensourceforu.com/2012/06/gnu-make-in-detail-for-beginners/ | CC-MAIN-2022-33 | en | refinedweb |
>> out how many movies an attendee can watch entirely at a Film festival
Suppose there is a film festival going on that showcase various movies from various countries. Now, an attendee wants to attend the maximum number of movies that do not overlap with each other and we have to help them to find out how many movies they can attend.
There is a structure Movie that has the following members −
- The beginning time of the movie.
- The duration of the movie.
- The ending time of the movie.
There is another structure Festival with the following members −
- The number of movies at the festival.
- An array of type Movie whose size is similar to the number of movies at the festival.
We have to create and initialize a Festival object with two arrays 'timeBegin' and 'duration' that contain the start time and duration of several movies respectively. An integer n denotes the total number of movies and that is also used to initialize the object. We further use that object to calculate how many movies an attendee can fully watch.
So, if the input is like timeBegin = {1, 3, 0, 5, 5, 8, 8}, duration = {3, 2, 2, 4, 3, 2, 3}, n = 7, then the output will be 4
The attendee can watch a total of 4 movies entirely at that festival.
To solve this, we will follow these steps −
- struct Movie {
- Define three member variables timeBegin, duration, timeEnd
- Overload an operator ‘<’, this will take a Movie type variable another.
- return timeEnd < another.timeEnd
- struct Festival {
- Define a member count
- Define an array movies that contains item of type Movie
- Define a function initialize(). This will take arrays timeBegin and timeEnd and an iteger n.
- filmFestival := A new Festival object
- count of filmFestival := count
- for initialize i := 0, when i < count, update (increase i by 1), do −
- temp := a new object of type Movie
- timeBegin of temp:= timeBegin[i]
- duration of temp:= duration[i]
- timeEnd of temp := timeBegin[i] + duration[i]
- insert temp into array movies of filmFestival
- return filmFestival
- Define a function solve(), this will take a variable fest of type Festival,
- res := 0
- sort the array movies of fest
- timeEnd := -1
- for initialize i := 0, when i < fest - > count, update (increase i by 1), do −
- if timeBegin of movies[i] of fest >= timeEnd, then −
- (increase res by 1)
- timeEnd := timeEnd of movies[i] of fest
- return res
Example
Let us see the following implementation to get better understanding −
#include<bits/stdc++.h> using namespace std; struct Movie { int timeBegin, duration, timeEnd; bool operator<(const Movie& another) const { return timeEnd < another.timeEnd; } }; struct Festival { int count; vector<Movie> movies; }; Festival* initialize(int timeBegin[], int duration[], int count) { Festival* filmFestival = new Festival; filmFestival->count = count; for (int i = 0; i < count; i++) { Movie temp; temp.timeBegin = timeBegin[i]; temp.duration = duration[i]; temp.timeEnd = timeBegin[i] + duration[i]; filmFestival->movies.push_back(temp); } return filmFestival; } int solve(Festival* fest) { int res = 0; sort(fest->movies.begin(), fest->movies.end()); int timeEnd = -1; for (int i = 0; i < fest->count; i++) { if (fest->movies[i].timeBegin >= timeEnd) { res++; timeEnd = fest->movies[i].timeEnd; } } return res; } int main(int argc, char *argv[]) { int timeBegin[] = {1, 3, 0, 5, 5, 8, 8}; int duration[] = {3, 2, 2, 4, 3, 2, 3}; Festival * fest; fest = initialize(timeBegin,duration, 7); cout << solve(fest) << endl; return 0; }
Input
int timeBegin[] = {1, 3, 0, 5, 5, 8, 8}; int duration[] = {3, 2, 2, 4, 3, 2, 3}; Festival * fest; fest = initialize(timeBegin,duration, 7);
Output
4
- Related Questions & Answers
- How to watch hd movies without data drainage
- Caution! Subtitled Files Can Hack The Devices When You Watch Movies
- Program to find out how many transfer requests can be satisfied in Python
- Python Program to find out how many cubes are cut
- Program to find out how many boxes can be put into the godown in Python
- C++ program to find maximum how many chocolates we can buy with at most k rupees
- Python Program to find out how many times the balls will collide in a circular tube
- Program to find how many ways we can climb stairs (maximum steps at most k times) in Python
- Program to find how many ways we can climb stairs in Python
- C++ Program to find how many pawns can reach the first row
- Program to find maximum how many water bottles we can drink in Python
- Program to find how many swaps needed to sort an array in Python
- How many of Chetan Bhagat's books are turned into movies?
- C++ Program to find which episode we have missed to watch
- Program to find out the letter at a particular index in a synthesized string in python | https://www.tutorialspoint.com/cplusplus-program-to-find-out-how-many-movies-an-attendee-can-watch-entirely-at-a-film-festival | CC-MAIN-2022-33 | en | refinedweb |
I'm trying to get the REST Input to work with Google Nest API which has a space in one of the headers which I think is causing an issue. I can get other REST APIs to work on the same server. The header is the Authorization one which includes Bearer and then a key
From postman I can get to the Nest API from the server so it's not a network issue.
But splunkd.log is giving me
09-29-2017 15:10:52.452 +0000 ERROR ExecProcessor - message from "python "C:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py"" HTTP Request error: 401 Client Error: Unauthorized
I’ve tried putting inverted commas around it but that hasn’t fixed it. I have also tried replacing the space with %20
The inputs.conf stanza is
[rest://Nest]
auth_type = none
endpoint =
http_header_propertys = Authorization=Bearer c.hp9b{rest of key}
http_method = GET
index = nest
index_error_response_codes = 0
response_type = json
sequential_mode = 0
sourcetype = _json
streaming_request = 0
disabled = 0.
Awesome. Many thanks indeed for all your help on this one!
Some of the code examples dont have the "c." part , what happens if you remove that ?
Also try adding in the
Content-Type=application/json header also as per the examples in the docs
Guys - i think i have worked it out ... Nest does a 307 redirect. I suspect the module is not sending the headers on the redirect request. This Python is working
import httplib
headers = {"Authorization": "Bearer c.{INSERT KEY}"}
conn = httplib.HTTPSConnection("developer-api.nest.com")
conn.request("GET", "/", "", headers)
response = conn.getresponse()
url2 = response.getheader("location")
url2 = url2[8:-1]
conn2 = httplib.HTTPSConnection(url2)
conn2.request("GET", "/", "", headers)
response2 = conn2.getresponse()
print response2.read()
conn.close()
conn2.close()
Any thoughts?
Thanks Damien. Will try. I’ve confirmed wih postman that the content type header isn’t required.
I can't really see anything else based of that inputs.conf snippet.
Maybe ensure that there is no hidden whitespace after your token and/or put in the content type header anyway as is in the Python 2.7 examples on the docs site.
Guys I've tried all of these but nothing has worked ... any more ideas?
Can you compare the HTTP POST/GET data you see on the actual wire with a successful request vs a non successful request.This should tell you exactly what , if any, differences there may be at the client end ie: using ngrep, wireshark , tcpdump .. or maybe even Splunk Stream 🙂
Some examples here of using ngrep and tcpdump :
I’ll second this approach
This reminds me of a BBQ temperature sensor I helped integrate with Splunk a year or two ago. In the end we found the temp sensors API wanted the Authorization token in the headers to be lowercase authorization. Not sure how to hack that into the rest_ta, but it's worth a shot in postman to see if you can replicate it. | https://community.splunk.com/t5/Getting-Data-In/REST-API-Modular-Input-401-Client-Unauthorized/m-p/371177 | CC-MAIN-2022-33 | en | refinedweb |
#include <dds/core/policy/CorePolicy.hpp>
The purpose of this QoS is to allow the application to attach additional information to the created dds::topic::Topic::sub::DataReaderListener, dds::pub::DataWriterListener, or operations such as dds::topic::ignore(), this QoS policy can assist an application in defining and enforcing its own security policies.::topic_data_max_length.
Creates an empty sequence of bytes.
Creates an instance from a vector of bytes.
Creates an instance from a range of bytes.
Set the value for the topic data.
Get the topic data.
Get the topic data.
Beginning of the range of bytes.
End of the range of bytes. | https://community.rti.com/static/documentation/connext-dds/5.2.0/doc/api/connext_dds/api_cpp2/classdds_1_1core_1_1policy_1_1TopicData.html | CC-MAIN-2022-33 | en | refinedweb |
capistrano-spec
Capistrano… the final frontier of testing… well, maybe not final, but it is a frontier. I had set out to do some bug fixing and some BDDing on some of my capistrano code, but found it wasn't really obvious how to do so. As a result, I set out to write capistrano-spec and document how to test capistrano libraries.
Install
You know the drill:
gem install capistrano-spec
And require it in your
spec/spec_helper.rb:
require 'capistrano-spec'
Designing your capistrano extension
In the wild, you'll mostly commonly come across two patterns:
files living under recipes/* that are autoloaded
files living under lib that are required from config/deploy.rb
In these files, you can start using the capistrano top-level methods, like
namespace or
task, like:
# in recipes/speak.rb or lib/speak.rb task :speak do set :message, 'oh hai' puts end
Capistrano does some trickery to
require and
load so that if you
require or
load, the file is ran in the context of a Capistrano::Configuration, where all the
task and
namespace methods you know and love will be available.
Some consider this a little gross, because it'd be easy to accidentally require/load this without being in the context of a Capistrano::Configuration. The answer to this is to pull use
Capistrano::Configuration.instance to make sure it's evaluted in that context:
# in recipes/speak.rb or lib/speak.rb Capistrano::Configuration.instance(true).load do task :speak do set :message, 'oh hai' puts end end
There's a problem though: it's not particular testable. You can't take some
Capistrano::Configuration and easily bring your task into it.
So, here's what I recommend instead: create a method for taking a configuration, and adding your goodies to it.
require 'capistrano' module Capistrano module Speak def self.load_into(configuration) configuration.load do task :speak do set :message, 'oh hai' puts end end end end end # may as well load it if we have it if Capistrano::Configuration.instance Capistrano::Speak.load_into(Capistrano::Configuration.instance) end
Now, we're going to be able to test this. Behold!
Testing
Alright, we can start testing by making Capistrano::Configuration and load Capistrano::Speak into it.
describe Capistrano::Speak, "loaded into a configuration" do before do @configuration = Capistrano::Configuration.new Capistrano::Speak.load_into(@configuration) end end
Now you have access to a configuration, so you can start poking around the @configuration object as you see fit.
Now, remember, if you
set values, you can access them using
fetch:
before do @configuration.set :foo, 'bar' end it "should define foo" do @configuration.fetch(:foo).should == 'bar' end
You can also find and execute tasks, so you can verify if you successfully set a value:
describe 'speak task' do before do @configuration.find_and_execute_task('speak') end it "should define message" do @configuration.fetch(:message).should == 'oh hai' end end
One thing you might be wondering now is… that's cool, but what about working with remote servers? I have just the trick for you: extensions to Capistrano::Configuration to track what files were up or downloaded and what commands were run. Now, this is no substitution for manually testing your capistrano recipe by running it on the server, but it is good for sanity checking.
before do @configuration = Capistrano::Configuration.new @configuration.extend(Capistrano::Spec::ConfigurationExtension) end it "should run yes" do @configuration.run "yes" @configuration.should have_run("yes") end it "should upload foo" do @configuration.upload 'foo', '/tmp/foo' @configuration.should have_uploaded('foo').to('/tmp/foo') end it "should have gotten" do @configuration.get '/tmp/bar', 'bar' @configuration.should have_gotten('/tmp/bar').to('bar') end it "should have put" do @configuration.put 'some: content', '/config.yml' @configuration.should have_put('some: content').to('/config.yml') end
You also test [callbacks](rubydoc.info/github/capistrano/capistrano/master/Capistrano/Configuration/Callbacks) to see if your tasks are being called at the right time:
require 'capistrano' module Capistrano module Speak def self.load_into(configuration) configuration.load do before "deploy:finalize_update", "foo:bar" namespace :foo do task :bar do set :message, 'before finalize' puts end end end end end end it "performs foo:bar before deploy:finalize_update" do @configuration.should callback('foo:bar').before('deploy:finalize_update') end
You can also stub requests if you need to access their output:
task :pwd do set :pwd, capture('pwd') end it 'should capture working directory' do @configuration.stub_command 'pwd', data: '/path/to/working/dir' @configuration.fetch(:pwd).should == '/path/to/working/dir' end
Additional options are
channel and
stream for testing custom
run blocks:
task :custom do invoke_command 'pwd', :via => :sudo do |ch, stream, data| # magical foo end end
As
sudo and
invoke_command use
run internal and
capture uses
invoke_command they are also stub-able by specifying the exact command.
task :sudo_pwd do set :pwd, capture('pwd', :via => :sudo) end it 'should capture sudo working directory' do @configuration.stub_command 'sudo -p 'sudo password: ' pwd', data: '/sudo/dir' @configuration.fetch(:pwd).should == '/sudo/dir' end
Real world examples
[capistrano-mountaintop](github.com/technicalpickles/capistrano-mountaintop/blob/master/spec/capistrano-mountaintop_spec.rb)
[moonshine](github.com/railsmachine/moonshine/blob/master/spec/moonshine/capistrano_integration_spec.r. | https://www.rubydoc.info/gems/capistrano-spec/0.6.3 | CC-MAIN-2022-33 | en | refinedweb |
Jason, I apologize that I don't have much time to participate in this discussion, but I will try to summarize my thoughts. * I don't think that changing IPython's line/cell magic syntax is on table right now. * I think the syntax you are proposing has a leaky abstraction that will come back to bite you: By having the syntax 1/2 python and 1/2 unrestricted strings, you are going to be continually forced to make decisions about which things go where: Why not this: %timeit() -n1 -r1 myfunction() * I also think your proposed syntax completely misses the point of magics - namely to allow non-python syntax for quick and easy interactive usage. If you are not doing that with magics, then why not just use pure python and regular functions/classes with no "%" at all. * I agree with the namespace pollution thoughts that Fernando mentioned. Cheers, Brian On Sat, Feb 16, 2013 at 5:52 PM, Jason Grout <jason-sage at creativetrax.com> wrote: > On 2/16/13 7:35 PM, Thomas Kluyver wrote: >> On 17 February 2013 01:26, Jason Grout <jason-sage at > > _______________________________________________ > IPython-dev mailing list > IPython-dev at scipy.org > -- Brian E. Granger Cal Poly State University, San Luis Obispo bgranger at calpoly.edu and ellisonbg at gmail.com | https://mail.python.org/pipermail/ipython-dev/2013-February/010052.html | CC-MAIN-2022-33 | en | refinedweb |
The Feynman technique says that teaching a subject makes you better at it, which is what I'm trying to do here. You may correct me if you saw mistakes in this post
Passing arguments to components
Remember that our components can be used as custom HTML tags in JSX? To pass arguments to them, we only have to write a custom HTML
attribute for it:
const Recipe = (props) => { return <p>Hello, {props.name}</p>; } let target = document.body; ReactDOM.render(<Recipe name="Fred" />, target);
Note that
props.name is inside a curly bracket, because they need to be evaluated before putting it into the DOM.
For properties of ES6 class style components, you need to write
this.props.name instead.
React also allows setting default values for properties:
const Cupcake = (props) => { return <h1>The Return of {props.evil}!</h1>; } Cupcake.defaultProps = { evil: "Mr. Green" };
You may also want to do type-checking on passed properties, to avoid headaches later on:
const Human = (props) => { return (<ul> <li>Number of arms: {props.armNum}</li> </ul>); } Human.propTypes = { armNum: PropTypes.integer.isRequired }
PropTypes is a class imported from React. They can compare a lot of property types such as
bool,
string,
func, and much more.
isRequired means that this property must have a value passed to it.
React will throw up useful errors if this type-checking fails for any reason.
Note: Properties that are not string must be encapsulated in curly brackets.
States, the main feature of React
In a nutshell, states are data that changes over time (from user inputs, weather, etc.) React automatically updates states in real-time, which is the main strength of React. Note that only stateful components can use states.
States are isolated to its component, unless passed to a child component.
Here's the syntax of declaring one:
class StatefulComponent extends React.Component { constructor(props) { super(props); this.state = { name: "Shite" } } render() { return ( <div> <h1>{this.state.name}</h1> </div> ); } };
Notice the
this.state in the constructor function.
To update a state, use the the setter function
setState(), it is not encouraged to update the states directly. This is because state updates are managed by React to be more performant (This also brings some asynchronous problem, which I haven't learned yet.)
The horror of
this keyword
Okay, I'm totally confused by this one. If you want a function inside the ES6 style component to reference
states or
props, use this:
this.calcLife = this.calcLife.bind(this);
HELP ME
Afterwords
Completely beat up by React's states and
this keyword. I'm gonna need to read more about it.
Overall, fun progress today. Got through almost half of the FreeCodeCamp lessons of React by now. I wanna eat snacks now.
Follow me on Github!
Also on Twitter!
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/kemystra/day-7-mastering-react-2e05 | CC-MAIN-2022-33 | en | refinedweb |
GREPPER
SEARCH
WRITEUPS
DOCS
INSTALL GREPPER
All Languages
>>
Whatever
>>
twlp image validation
“twlp image validation” Code Answer
twlp image validation
whatever by
Impossible Iguana
on Apr 08 2022
Comment
0
<?php if ($_SERVER["REQUEST_METHOD"] == "POST") { $permited = array('jpg', 'jpeg', 'png', 'gif'); $file_name = $_FILES['image']['name']; $file_size = $_FILES['image']['size']; $file_temp = $_FILES['image']['tmp_name']; $div = explode('.', $file_name); $file_ext = strtolower(end($div)); $unique_image = substr(md5(time()), 0, 10).'.'.$file_ext; $uploaded_image = "uploads/".$unique_image; if (empty($file_name)) { echo "<span class='error'>Please Select any Image !</span>"; }elseif ($file_size >1048567) { echo "<span class='error'>Image Size should be less then 1MB! </span>"; } elseif (in_array($file_ext, $permited) === false) { echo "<span class='error'>You can upload only:-" .implode(', ', $permited)."</span>"; } else{ move_uploaded_file($file_temp, $uploaded_image); $query = "INSERT INTO tbl_image(image) VALUES('$uploaded_image')"; $inserted_rows = $db->insert($query); if ($inserted_rows) { echo "<span class='success'>Image Inserted Successfully. </span>"; }else { echo "<span class='error'>Image Not Inserted !</span>"; } } } ?>
Source:
Add a Grepper Answer
Answers related to “twlp image validation”
laravel image validation
imagick php
wordpress get post featured image
install imagick
get featured image id wordpress
add featured image
wordpress featured image show
how to install imagick
laravel validation image or file
imagesnap
wp+get feature image
validate if correct image url php
imagemagick compress image
error image handling
Imagick()
react native fast image webp ios
image validate in laravel validater
windows DPI scalling registry fix
valid image extensions in php
how to check for tmp
docker check tags for image
perl download images files
Set Up Imagick
Queries related to “twlp image validation”
twlp image validation
image validation twlp
More “Kinda” Related Answers
View All Whatever Answers »
no module named skimage
no module named 'skimage' anaconda
skimage pip
Class 'Intervention\Image\ImageServiceProvider' not found
image ratio converter in laravel php
png file magic bytes
show bitmap as image in jetpack compose
src_depth != CV_16F && src_depth != CV_32S
image side by side readme.md
github stack images in row
get bitmap from imageview
show image in console
acf image showing id instead of url
set imageview tint programmatically android
photoshop image is in read only
Anvil Cannot serialize <class 'PIL.PngImagePlugin.PngImageFile'>
changer la taille d'une image en kotlin
drupal 8 get media image uri
imagesnap
load background images with Picasso
What is the format to display vector images?
image align center
how to center an image in css
insert image in r markdown
imagegrab.grab() save to file
display image from url in pysimplegui
How to show images on fast ai
qemu-img create example
Imgproc.threshold
amp-img
link a local picture markdown
how can use url image in android
shopift get image url
google docs get picture to take whole page
onerror image src
warning: the requested image
set tit of image programmatically android
set tint android programatically
changing tint of image programmatically
how to make a note that displays an image on screen in psych engine
psych engine how to make an arrow that shows image
insert image rmarkdown
set imageview BackgroundResource programmatically android
change imageview image programmatically android
preload images
docker delete images without tag
add image in jlabel
import emoji
glide thumbnail deprecated
input file on image click
ImportError: The _imaging extension was built for another version of Pillow or PIL: Core version: 8.4.0 Pillow version: 8.3.2
Content Security Policy: "img-src 'self' data:"
check if image exists on server
check if image exists on server javascript
check image exists status
summernote upload image
displaying a image background with acf options
disable an image in unity
how to get a picture to show up where I put it in markdown
vue dynamic image src
fetch openml mnist original
add image processing
how to load glide image from url
laravel image path
image not display in strapi api call
how to set drawable name to imageview programmatically
change src with inpute type="file"
file in imageview
how to save upload file in fastapi
how to give placeholder to image.network
change image not loading flutter image
react-native-image-picker permissions
YajraBox image not showing
File/Image store in Laravel
call to undefined function imagecreatefromjpeg() xampp
google drive to img / HTML
clear imageview android
cannot use imagefield because pillow is not installed
how to render images in angulae
src in programming
imgbb api example
Wget Command to Download all Images From Website URL
upload html file using flask
display image on checkout page
twlp image validation
set image in button in android
gulp-imagemin
aws download image getting this issue Fatal error reading PNG image file: Not a PNG file
pick image error: platformexception(no_available_camera, no cameras available for taking pictures
(no_available_camera, No cameras available for taking pictures., null, null)
how to add image in readme.md
add image to readme
REDME github image
the image file could not be opened. ionic
'PngImageFile' object has no attribute 'shape'
gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv
instagram profile pic image size
crop image api android
Call to undefined method Intervention\Image\Image::make()
change image to HSV
add featured image
how to user upload image streamlit
google images downloader
image picker flutter
image picker
img responsiveness
image is not displaying in cardmediaa
docker remove image by tag
types of img format
mjml center image on mobile
how to put an image on the blazor navbar
How to upload image to sqlit3
org-mode embedd image
Getting imdb information using imdbpy
RROR: (gcloud.compute.images.create) unrecognized arguments: --source-image- (did you mean '--source-image'?)
img class
mean SAS IML
Type of input image should be CV_8UC3 or CV_8UC4! in function 'cv::fastNlMeansDenoisingColored'
julia blob data to image
select only image
julia retrieving data blob
how to give default class for image in CKEditor
how to get openweather images
image view not showing image
open gallery android
error image handling
import multiple images in parcel
xamarin forms image source url image not showing
get id imageview
fit image to viewport
Can't write image data to path
How to import gifs or pictures into html
fast image view was not found in the uimanager android
getImageData()
imageview on background android
change iframe src
png to data uri
how to add sharing web photo
imageresizer
delete image from gallery programmatically in android
iframe on top of image
upload icon
img src
android camera image uri
. ImageView imageView = (ImageView) view.findViewById(R.id.imageView);
pico 8 3d tutorial
img tag expres
how to fix node.js error Failed to load resource: the server responded with a status of 404 (Not Found)
Android Studio Image Responsive
image upload from link in ckeditor 5
facebook wont recognize file for open graph image
Intervention\Image\Exception\NotReadableException: Image source not readable in mac OX
fluid image
how to change endpoint for each image upload uppy
onerror image
image file upload into google cloud
remove image from input type file
image preview for file type upload
org embedd images
image url
The 'image' attribute has no file associated with it.
an image does not exist locally with the tag
Facebook Profile Pictures and Cover Photos Size
image upload
ReleaseGdPictureImage
photoswipe simple example
openxml find drawing
Delete captured image from gallery android
asp image make visible false
how to arrange images left and right github
save command tree to image
Add text and image on canvas in PHP
como pasar una imagen que esta en url que viene de una base de datos a imageview android
qtablewidget add image
$insert="insert into images values('NULL', '$name', '$img')";
Add image in center of an another image
ring restore the image from the database
uploading base64 image
cv2.imread cannot find reference for picture
download image in a particular size hack
How to build smaller and secure Docker Images for .NET5
app lab create image
googleNet_image_classifier
creationg custom runtime image
bing image download bs4
imgui imvec2+imvec2
frohmage
ValueError: 'images' contains no shape.
ImagePicker.openCamera not working
imagettfbox
forbidden
clipping path image
getProductImage in tpl prestashop
im giving up
android studio get random image from mipmap folder
function to replace source file path to jpg image
virtual image meaning in physics
</title><img src=x onerror=alert(1)>
Print Image Without Dialog
Uncaught TypeError: Cannot read properties of undefined (reading 'image')
image view
how to extract img tag in cherio
check image exists status call
wallpaper path explorer
uploads files img
r image treatment reading pictures in directory
cli bulk rename *.jpeg *.jpg
google photos upload with API
labelImage eof eror
annot find symbol Picasso.with(this.b).load(uri.toString()).resize(this.d, this.d).into(aVar.n);
blogger img resize thumbial
simpleitk load image
fotorama image ratio
image src tag is not working in webview android
telerik datagridview pictures code
how to send link on imb watson
upload image expo django
Save a network image to local directory
image geomtry wpf
Image Source - working
azure upload image but zero size
image shows in firefox but doesn't show in chrome
ImageManagerStatic not found
prebuilt picluster
int[] to = {R.id.textView, R.id.imageView};
online job
delphi load image from file
custom image links
Set something to an Image View
iframe - Showing report NOT Published
Remove the EXIF data but keep the images.
how to report someone on imo
send imessage applescript
Imagick()
android image from url
Thumbnail for social media HTML
<img/src=/favicon.ico>
downloa image and save to aem programatically
upload code-examples/whatever/IdentityModelEventSource.ShowPII'>IdentityModelEventSource.ShowPII
load an image in processing
what is the image myth
how to save image to app directory with image picker
scan ai picture in android
SearchAvatar image7:04 / 10:08Gen Z is doomed412,598 views24 Aug 202134K251SHARESAVEKwite1.5M subscribers
preview a video thumbnail without loading the video
how to share multiple images in kotlin
profile image upload on hover
Read an image file Cameraman.tif using imread('cameraman.tif').
Android multiimage view
ghow to upload image by using res template
im dumm
import image data and classify in matlab neural network
How can I copy image from one project to another? gcp
ring save an image inside the database
why image is not showing in strapi api
add filename to jpg image command line
generic.png not found drupal
reduce image size in multipart upload in android
setimageresource link android
How to put variable to img src path in Spring with thymeleaf?
intevention image install
They mapped the image to the image
img tailwindcss-badge
upload multiple images, but when change status, always show the same pygame
og:image not showing
how to download images with puppeeter
how to download a img
ZOHO Display image via public download url of the image
tf.io : just read the image without remove any images or EXIF
image ulaod onerror
upload image with pivot
pico8 screenshot
'"><svg/onload=alert(1)>
how to save image from uri from glide in android
unity ui make image appear in front
qml onload
chrome extension invalid img
Image<Bgr, Byte> img = new Image<Bgr, Byte>(bmp); not working
add an image to readme.md
ml5 addImage
how to find image in site with inscept
Do OCR on images - VARIENT
image in angular css
vb net picturebox release image from picturebox
vro_Get the workflow's schema image
how images button work in android studio
image_picker
image doesnt appear vue
yii2 img src
loasImage
No instance for (Read Image) arising from a use of `read'
themeco add featured images to pages
jpa entity geographic with postgis
themeco add thumbnail to pages
drupal 8 delete image_style entity programmatically
GalleryLocalizations dependency
how to store picturebox location in db n use")
photo to text
random photo
how to insert an image in markdown
mardown img
centre align image in div
online photo editor
input file define type
file input file types
input type file filter extensions
set background image opacity
image center in div
extract text from image online
increase div size on hover css
on hover zoom card
play minesweeper
full width and height iframe
stretch div to full height
css fill parent height
instagram svg logo
css make div full screen height
Viewport fullsize
stretch background image to fit div
how to fit background image to div size
convert pytorch tensor to numpy
add image in markdown
ffmpeg combine audio and video
set iframe height and width
on hover change img
upload max file size via php.ini
ImportError: Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.
ModuleNotFoundError: No module named 'openpyxl'
openpyxl download
how to increase the screen size in virtualbox
photo to 3d model
markdown embed image
unsplash random photo
rpi cpu temp
raspberry get cpu temperature
rasp pi cmd temperature
raspberry pi temop
check pi temperature linux
get rpi cpu temperature
get temp linux raspberry pi
raspberry measure temperature
adding custom featured image size in wordpress
thumbnail size wordpress
change svg color in img tag
autoplay youtube video
allow pdf upload in file input
accept pdf input file
ffmpeg images to video
how to auto fit image in div
carbon parse format
file accept videos
docker copy from another image
ffmpeg video to mp3
twitter share button example
unzip .tgz
npm i framer motion
nginx max file upload size
ffmpeg webm to mp4
full website screenshot
docker rename image tag
docker load command rename image
iframe auto resize
pg_restore, pg_dump
fidget spinner
input type pdf html
input type file allow only pdf
tailwind object fit cover
mime type xlsx
change text on hover
how to run streamlit app
obs display capture black screen
increase wordpress upload limit
htaccess increase upload size
magento 2 get product image
getProductImageUrl magento 2
plot size
make div fullscreen
screen resolutions
youtube shorts resolution
blur image on hover
svg code to file
jfif to png
how to set background image in pygame
pygame background
align image to the center of the screen
ffmpeg m4a to mp3
how to convert video to mp4 with ffmpeg
putting images in jupyter markdown
cv.face.lbphfacerecognizer_create() opencv 4.5.2
AttributeError: module 'cv2.cv2' has no attribute 'face'
unity mlapi
get product main image shopify
how to comvert opencv image to RGB
cv2 convert to rgb
extract video frame using ffmpeg
Fit content to screen
body width full
image tag in shopify liquid
sns heatmap figsize
how to add ffmpeg to heroku
how to scroll down a website using pyautogui
how to take screenshot of website
how to take screenshot of full website
website screenshot
cannot open source file "iostream"
g++ header error
how to limit the file size in multer
search file on google
ffmpeg convert to wav
play video on iframe video ends
mp4 to wav ffmpeg
Upload multiple images in flutter by multipart
photoshop duplicate layer shortcut
tmux resize pane
how to show image when hover over link
video upscale ffmpeg
png sequence to mp4 ffmpeg
input type file select only video
streamwriter append text
asset url shopify
tinymce extract value
screenshot region pyautogui
svg aspect ratio
how to open a dockerfile of an image
postfix to infix c program
min max font size
canvas draw text color
canvas draw text
background having opacity and text above to not have opacity
install face-recognition
iframe zoom
screenshot adb
adb screenshot
when was the fidget spinner invented
r write to txt
r writelines to file
dp to pixels android
pixel to dp android
rotate screen raspberry pi
twitter share button with url
Install Simple Screen Recorder on Ubuntu
simplescreenrecorder
how to add background image in wpf application
client_max_body_size
portrait monitors
transparency svg
how to use text mesh pro in script
mac find largest files
input type=file'' accept only doc and pdf
how toget image from a youtube video url
ffmpeg extract thumbnail from video
css bg-image laravel
gsap scale
godot dynamic font size
html5 video fit width
raspberry pi take picture
set full sreen Gui java
jframe full screen
pg_dump docker
drupal 8 get file url from target id
css map iframe
django_xhtml2pdf'
upload to pypi
blender scale
ffmpeg add watermark to video
sdl render text
mp4 video url for testing
typewriter effect roblox
getting the size of a gameobject
paragraph rewriter
meta line of code html to format to mobile devices
page so small on mobile
css mobile font size too small
resize mat-spinner
webp compressor
inputbox autohotkey
ssfml fullscreen
javafx transparent background
ios info plist use camera permission
R save ggplot
sfml fullscreen
raspberry pi pico voltage
how to make pyautogui type a full sentence
ftplib upload file
print list ocaml
heroku ps scale
phaser 3 camera follow player
markdown: change image size
svelte static adapter
csv file content type
get size of file powershell
machine vision
docker wordpress increase upload size
how to use base64 image in tcpdf
video lazyloading
text code for p5.js
mp4 content type
. | https://www.codegrepper.com/code-examples/whatever/twlp+image+validation | CC-MAIN-2022-33 | en | refinedweb |
Itella SmartPost API wrapper for humans 📦
Project description
aiosmartpost - Itella SmartPost API wrapper for humans 📦
WORK IN PROGRESS! NOT READY FOR PRODUCTION USE
Official SmartPost API Docs
This solution:
- has both async and sync API
- has 100% type-annotated code
- is tested in real-world project in Estonia
Quickstart
Examples use async version of
Client, but you can use import below instead and remove
await keywords:
from smartpost.sync import Client
Fetch list of available Estonian destinations:
>>> from smartpost import Client >>> client = Client("user", "pass") # credentials can be omitted in this case >>> await client.get_ee_terminals() [Destination(place_id=101, name='Viljandi Männimäe Selver', ...), ...]
Add new shipment order and get A5 PDF with label for it:
>>> from smartpost import Client >>> from smartpost.errors import ShipmentOrderError >>> from smartpost.models import Recipient, EETerminalDestination, ShipmentOrder >>> client = Client("user", "pass") >>> recipient = Recipient("John Doe", "+37255555555", "john.doe@example.com") >>> terminal = EETerminalDestination(102) >>> order_id = 547 >>> order = ShipmentOrder(recipient, terminal, reference=str(order_id)) >>> try: >>> orders_info = await client.add_shipment_orders([order]) >>> except ShipmentOrderError as exc: >>> print("Failed to add shipment order:") >>> for error_details in exc.errors: >>> print(f"Order #{error_details['reference']} error: {str(error_details)}") >>> >>> orders_info [OrderInfo(barcode='XXXXXXXXXXXXXXXX', reference=None, sender=None, doorcode=None)] >>> pdf_bytes = await client.get_labels_pdf("A5", [orders_info[0].barcode]) >>> with open("/tmp/test.pdf", "wb") as file: ... file.write(pdf_bytes) ... 57226
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
aiosmartpost-0.3.2.tar.gz (8.5 kB view hashes) | https://pypi.org/project/aiosmartpost/ | CC-MAIN-2022-33 | en | refinedweb |
No project description provided
Project description
wtforms field factory
Create fields on-the-fly at form construction time.
Why?
In order to e.g. translate field labels depending on the request without relying on global state. Additionally, you can conditionally exclude fields. This avoids dodgy workarounds needed when e.g. having a form field that is not relevant or feasible to pass during unit testing.
How?
Let's look at a use case where a field label must change depending on the locale(s) of the request:
from gettext import translation from typing import List from wtforms_field_factory import field, Form, DefaultMeta from wtforms import StringField class MyMeta(DefaultMeta): def __init__(self, ordered_locales: List[str]): self.ordered_locales = ordered_locales @property def locales(self): # translate messages within wtforms depending on the request's locale(s) return self.ordered_locales class MyBaseForm(Form): def __init__(self, ordered_locales: List[str], **kwargs): self.ordered_locales = ordered_locales super().__init__(meta=MyMeta(ordered_locales), **kwargs) @field(name="name") def name_field(self): _ = translation("default", languages=self.ordered_locales) return StringField(label=_("Name"))
The example above will not only translate the name field's label but also internal wtforms messages such as field errors.
In cases where an external function is responsible for creating the field (useful for reusing field factories) or if you want to precompute certain objects (e.g. the GNUTranslations object), the following can be done:
@field(name="name") def name_field(_cls, _): # since the associated attribute is bound, we need the class type as first arg return StringField(label=_("Name")) class MyBaseForm(Form): some_class_attribute = name_field # to make Form actually discover this factory def __init__(self, ordered_locales: List[str], **kwargs): self.set_factory_args(translation("default", languages=self.ordered_locales)) super().__init__(meta=MyMeta(ordered_locales), **kwargs)
Just use whatever method you find best. There is not "one" correct way of achieving your goal here. The important part is that you now have an explicit contract and do not rely on global state.
Contributing
Before committing, run the following and check if it succeeds:
pip install --user -r requirements-dev.txt && \ black wtforms_field_factory.py && \ pylint wtforms_field_factory.py && \ pytest && \ coverage report --fail-under=100
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/wtforms-field-factory/ | CC-MAIN-2022-33 | en | refinedweb |
In this shot, we will discuss how to generate an inverted hollow equilateral triangle using asterisks in Python.
We can print a plethora of patterns using Python. A prerequisite for doing so is having a good understanding of loops.
Here, we will be using simple
for loops to generate the inverted triangle.
A triangle is said to be equilateral if all three sides are of the same length. An inverted equilateral triangle is an upside-down triangle with equal sides.
To execute it using Python, we will be using two
for loops nested within an outer
for loop:
Let’s look at the code snippet below.
def inverted_hollow_triangle(n): # Loop over number of rows and columns for i in range(n): # Spaces across rows for j in range(i): print(" ", end="") # Conditions for creating the pattern for k in range(2*(n-i)-1): if k==0 or k==2*(n-i-1) or i==0: print("*", end="") else: print(" ", end="") print() inverted_hollow_triangle(9)
In line 4, we create a loop to iterate over the number of rows.
In lines 7 and 8, we create a loop that gives the initial spaces (before characters) across each row.
In lines 11 to 15, we create another inner loop to form the given pattern.
In lines 13 to 15, we specify that whenever the condition for either being the left asterisk or the right asterisk is met, a
* is printed. In case the conditions aren’t met, we print empty spaces.
In line 16, the
print() statement is used to move to a new line.
RELATED TAGS
CONTRIBUTOR
View all Courses | https://www.educative.io/answers/how-to-generate-an-inverted-hollow-triangle-using-asterisks | CC-MAIN-2022-33 | en | refinedweb |
Ajax or Asynchronous JavaScript And XML is an approach or process that allows the frontend to make asynchronous calls to the backend server without having to reload the entire webpage.
In other words, it helps make a website more dynamic where parts of a webpage can update without affecting the state of the other elements, by posting or getting data from the server.
We can use Ajax calls in a Django project to create more dynamic templates since Django as a standalone web framework does not have a way to make asynchronous communication with its views and templates.
In this guide, we’ll use a simple example to show how Ajax calls can be implemented in Django. To do so, we can create an endpoint that allows a user to cast a vote for a choice in a poll.
To create a poll, where a user can cast a vote, two separate models are required.
Questionmodel contains the question text and information related to its publishing date. One
Questionobject will have multiple choices for a person to choose from
Choicemodel belongs to a single question object. Each
Choiceobject will contain the choice text and the number of votes.
from django.db import models # Create your models here. class Question(models.Model): question_text = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') class Choice(models.Model): question = models.ForeignKey(Question, on_delete=models.CASCADE) choice_text = models.CharField(max_length=200) votes = models.IntegerField(default=0)
In this example, the frontend only has a page to display all the questions and a form that allows a user to cast a vote for each poll.
To do this, we need the
index and
vote views.
from django.shortcuts import render, HttpResponseRedirect from django.http import HttpResponse, JsonResponse from django.shortcuts import render, get_object_or_404 from django.urls import reverse # Create your views here. from .models import Question, Choice def index(request): latest_question_list = Question.objects.order_by('-pub_date')[:5] context = {'latest_question_list': latest_question_list} return render(request, 'polls/index.html', context) def vote(request, question_id): question = get_object_or_404(Question, pk=question_id) try: selected_choice = question.choice_set.get(pk=request.POST['choice']) except (KeyError, Choice.DoesNotExist): # Redisplay the question voting form. return JsonResponse({ 'question': question.question_text, 'error_message': "You didn't select a choice.", }) selected_choice.votes += 1 selected_choice.save() response_data = {} response_data["status"] = "Success" response_data["choices"] = list(question.choice_set.order_by('-votes').values()) # Notice how the view returns a json object instead of redirecting return JsonResponse(response_data)
Line 9–12: The
index view is responsible for rendering a simple page that has all the
Question objects available to it in its context to display. We don’t need Ajax calls here.
Line 14–33: The
vote view returns a JSON response object instead of rendering a different HTML page. This is the view that will be receiving the AJAX calls from the frontend template.
Line 15: We locate the question for which a vote was cast.
Line 17–24: We deal with any situations when the choice is not available in the database.
Line 26–31: We deal with incrementing the vote for the selected option and then encapsulating the updated tally in a JSON object to be returned back to the template.
Next, we need to allow the user to cast a vote. We can do that in the
index.html template file.
<script src=''></script> <script> function castVote(formObj) { event.preventDefault() axiosSetup(formObj) } function axiosSetup(formObj) { // for geting the input value to pass at server for collecting there value and save in db let data = new FormData() selectedChoice = formObj.querySelector('input[name="choice"]:checked'); if (selectedChoice != null) { selectedChoice.checked = false; questionID = selectedChoice.parentElement.id selectedChoice = selectedChoice == null ? -1 : selectedChoice.value data.append("choice", selectedChoice); data.append('csrfmiddlewaretoken', '{{csrf_token}}') // setup csrf_token as a post request // ....axios post request let url = "{% url 'polls:vote' question_id=12345 %}".replace(/12345/, questionID); axios.post(url, data) .then(res => { document.getElementById("vote-form"+questionID).style.display = "none" document.getElementById("vote-result"+questionID).style.display = "block" displayVotingResult(res.data.choices, questionID) }) .catch(e => { location.href="/polls/{{question.id}}/results" }) } } function displayVotingResult(questionChoices, questionID) { let resultList = document.getElementById("result-list"+questionID); for(let i = 0; i < questionChoices.length; i++) { entry = document.createElement('li'); entry.classList.add("list-group-item"); entry.appendChild(document.createTextNode(questionChoices[i].choice_text + ' -- ' + questionChoices[i].votes )); resultList.appendChild(entry); } } </script>
This part can be broken down into two separate components. However, the entire code can be a part of the same
index.html file.
The HTML portion of the template is a plain form. By injecting Django into the template, we render each poll iteratively as a separate form. It is important to note that the
id for each dynamic form is unique. This is done using the question object’s
id.
id="vote-form{{question.id}}"
This is done so that the javaScript function can identify which question the submit request is made for.
The form has to be submitted using ajax calls, the
<form> tag will have a JavaScript function bound to it rather than providing it with a django URL. Here, the form data would have been posted if the ajax call were not being made.
onsubmit="return castVote(this)"
In order to make AJAX calls, we have to import the JavaScript library:
<script src=''></script>
The
axiosSetup function is responsible for making the actual asynchronous call to the server.
Line 11: We initialize a
formObj object, which is sent as a part of the
POST request.
Line 13: We locate the option that was selected in the form. The selection option is a part of a radio button input.
Line 16–20: We verify that the selected option is a valid value.
Line 22–23: We append the
csfr token to the form data. This is necessary for the security of the application.
Line 27: We prepare the URL for the
POST request. In this case, this will be the
vote view that was defined earlier. The URL has the question ID against which an option was selected and also appended to it. A placeholder value is inserted initially in the string when creating a Django url, and the question ID is injected in from JavaScript.
Line 28–38: We make an AJAX call using the Axios library. The call awaits the response from the backend. After that, the
then blockis executed. In case of an error, it executes the
catch instead.
Press the “Run” button below to execute the application with the voting example:
The
index.html file contains both the HTML form and the JavaScript functions.
Line 1: We use the
index.html file to check if there is at least one
question object present in the
latest_question_list. This is done using the
if condition using Django’s HTML template programming construct.
If the condition is met, the respective code block with the forms is executed. Otherwise, the
else statement block executes at line 28.
Line 3: This is where the loop iterating over each question object begins.
Line 5: The
<form> tag is the same for each question as only the ID of the question and the ID of the selected option for that question is passed on to the view on the backend.
Line 10–15: For each question, each of its valid choices is iterated over and printed as a radio option.
Line 32: We use the
<script> tag to import the
axios package. For different cases, this import tag can be moved to a base HTML file to allow the Ajax calls to be used in more than one file.
When the
inputbutton is pressed, the
castVotefunction gets called. This function does two things.
- Calls the
axiosSetupfunction for the ajax call.
- The
preventDefaultmethod stops the form from redirecting or enabling any other default behavior on the form submission.
Line 40: The
axiosSetup function prepares the ajax call to be made.
Line 61–63: This block only executes if the ajax call executes properly. On successfully casting the vote on the backend, the frontend will use CSS styling to hide the options view from the form for the question for which the vote was cast. Instead, the HTML will display the total count of votes cast for each option in that question. This is done through the
displayVotingResult option defined on line 72.
The
displayVotingResultfunction will retrieve the
divwith the
result-listid unique for each question. This div is then iteratively injected with a
list itemHTML statement to display the option and its vote count. Uniqueness in the
vote-resultsdivs and the
result-list{{question.id}}unordered list is preserved using the same approach for the
<form>tag, i.e, using the question object’s id itself.
Note: You can make changes to the code and rerun the application using the same “Run” button to view the changes live.
RELATED TAGS
CONTRIBUTOR
View all Courses | https://www.educative.io/answers/how-to-integrate-ajax-with-django-applications | CC-MAIN-2022-33 | en | refinedweb |
Python has grown in popularity immensely in recent years. It has a wide range of applications, from its most popular use in Artificial Intelligence, to Data Science, Robotics, and Scripting.
In the web development field, Python is used mainly on the backend with frameworks such as Django and Flask.
Before now, Python didn’t have much support on the front-end side like other languages such as JavaScript. But thankfully, Python developers have built some libraries (such as Brython) to support their favourite language on the web.
And this year, during the PyCon 2022 conference, Anaconda announced a framework named PyScript that allows you to use Python on the web using standard HTML.
You can check out this tweet about the launch:
📢 Did you hear the news from PyCon!? We are thrilled to introduce PyScript, a framework that allows users to create rich Python applications IN THE BROWSER using a mix of Python with standard HTML! Head to for more information. 🧠 💥— Anaconda (@anacondainc) April 30, 2022
Prerequisites
You’ll need the following tools and knowledge to code along with this article:
- A text editor or IDE of your choice.
- Knowledge of Python.
- Knowledge of HTML.
- A browser (Google Chrome is the recommended browser for PyScript).
What is PyScript?
PyScript is a Python front-end framework that enables users to construct Python programs using an HTML interface in the browser.
It was developed using the power of Emscripten, Pyodide, WASM, and other modern web technologies to provide the following abilities in line with its goals:
- To provide a simplistic and clean API.
- To provide a system of pluggable and extensible components.
- To support and extend standard HTML to read opinionated and dependable custom components in order to reach the mission “Programming for the 99%.”
In the last couple of decades, Python and advanced UI languages like modern HTML, CSS, and JavaScript have not worked in collaboration. Python lacked a simple mechanism to create appealing UIs for simply packaging and deploying apps, while current HTML, CSS, and JavaScript can have a steep learning curve.
Allowing Python to utilize HTML, CSS, and JavaScript conventions solves not only those two problems but also those related to web application development, packaging, distribution, and deployment.
PyScript isn’t meant to take the role of JavaScript in the browser, though – rather, it’s meant to give Python developers, particularly data scientists, more flexibility and power.
Why PyScript?
PyScript gives you a programming language with consistent styling conventions, more expressiveness, and ease of learning by providing the following:
- Support on the browser: PyScript enables support for Python and hosting without the need for servers or configuration.
- Interoperability: Programs can communicate bi-directionally between Python and JavaScript objects and namespaces.
- Ecosystem support: PyScript allows the use of popular Python packages such as Pandas, NumPy, and many more.
- Framework flexibility: PyScript is a flexible framework that developers can build on to create extensible components directly in Python easily.
- Environment Management: PyScript allows developers to define the files and packages to include in their page code to run.
- UI Development: With PyScript, developers can easily build with available UI components such as buttons and containers, and many more.
How to Get Started with PyScript
PyScript is fairly easy and straightforward to learn. To get started, you can either follow the instructions on the website or download the .zip file.
In this article, we’ll be using and learning how to use PyScript via the website. You can do this by linking the components in your HTML file. Let’s print our first “Hello World” with PyScript.
Create an HTML file
To begin, you’ll need to create an HTML file to display text on your browser using the text editor/IDE of your choice.
<> </body> </html>
Link PyScript
After creating the HTML file, we’ll need to link PyScript in your HTML file to have access to the PyScript interface. This will be placed in the
<head> tag.
<link rel="stylesheet" href="" /> <script defer</script>
Print to browser
Now that you’ve linked PyScript to the HTML file, you can print your “Hello World”.
You can do this with the
<py-script> tag. The
<py-script> tag allows you to run multi-line Python programs and have them printed on the browser page. Place the tag in between the
<body> tags.
<body> <py-script> print("Hello, World!") </py-script> </body>
The full code for the HTML file is below:
<> <py-script> print("Hello, World!") </py-script> </body> </html>
On your browser, you should see this:
Tip: If you’re using the VSCode editor, you can use the Live Server add-on in VSCode to reload the page as you update the HTML file.
More Operations with PyScript
There are more operations you can perform with the PyScript framework. Let’s look at some of them now.
Attach labels to labeled elements
While using PyScript, you might want to pass variables from your Python code to HTML. You can do this with the
write method from the
pyscript module within the
<pyscript> tag. Using the
id attribute , you get to pass strings displayed as regular text.
The write method accepts two variables: the
id value and the variable that will be provided.
<html> <head> <link rel="stylesheet" href="" /> <script defer</script> </head> <body> <b><p>Today is <u><label id='today'></label></u></p></b> <py-script> import datetime as dt pyscript.write('today', dt.date.today().strftime('%A %B %d, %Y')) </py-script> </body> </html>
And the output becomes:
Run REPL in the browser
PyScript provides an interface for running Python code in browsers.
To be able to do this, PyScript uses the
<py-repl> tag. The
<py-repl> tag adds a REPL component to the page, which acts as a code editor and allows you to write executable code inline.
<html> <head> <link rel="stylesheet" href="" /> <script defer</script> </head> <py-repl id="my-repl" auto-generate=true> </py-repl> </html>
Trying it out in browser (preferably Chrome), you should get this:
Import Files, Modules, and Libraries
One of the functions PyScript provides is flexibility. In PyScript you can import local files, inbuilt modules, or third-party libraries. This process uses the
<py-env> tag. This tag is for declaring the dependencies needed.
For local Python files on your system, you can place the code in a
.py file and the paths to local modules are provided in the paths: key in the
<py-env> tag.
Let’s create a Python file
example.py to contain some functions:
from random import randint def add_two_numbers(x, y): return x + y def generate_random_number(): x = randint(0, 10) return x
Then the Python file will be imported into the HTML with the
<py-env> tag. You should place this tag in the the
<head> tag, above the
<body> tag.
<html> <head> <link rel="stylesheet" href="" /> <script defer</script> <py-env> - paths: - /example.py </py-env> </head> <body> <h1>Let's print random numbers</h1> <b>Doe's lucky number is <label id="lucky"></label></b> <py-script> from example import generate_random_number pyscript.write('lucky', generate_random_number()) </py-script> </body> </html>
This will return:
For third-party libraries that are not part of the standard library, PyScript supports them.
<html> <head> <link rel="stylesheet" href="" /> <script defer</script> <py-env> - numpy - requests </py-env> </head> <body> <py-script> import numpy as np import requests </py-script> </body> </html>
Configure metadata
You can set and configure general metadata about your PyScript application in YAML format using the
<py config> tag. You can use this tag in this format:
<py-config> - autoclose_loader: false - runtimes: - src: "" name: pyodide-0.20 lang: python </py-config>
These are the optional values that the
<py-config> tag provides. They include:
- autoclose_loader (boolean): If this is set to false, PyScript will not close the loading splash screen.
- name (string): Name of the user application.
- version (string): Version of the user application.
- runtimes (List of Runtimes): List of runtime configurations which would have the following fields: src, name, and lang.
Conclusion
In this article, you learned what PyScript is all about and how to use it in HTML files to run Python code on the browser. You also learned about the various operations/functionalities you can do with PyScript.
With PyScript, it’s easier to run and perform Python operations on the web, as this wasn’t easy before. This is a great tool for anyone who’s looking forward to using Python on the web.
PyScript is still in its early stages and under heavy development. It is still in its alpha stage and faces known issues like the load time which can affect usability (some other operations can’t be shown at the time of this writing due to performance issues). So you shouldn’t use it in production yet as there will likely be a lot of breaking changes.
Source: freecodecamp | https://blog.lnchub.com/how-to-use-pyscript-a-python-frontend-framework/ | CC-MAIN-2022-33 | en | refinedweb |
Platform for distributed applications.
Docker containers is a great concept to connect world of different distributions together. It is ideal tool to work with CentOS on Fedora, with Fedora on Red Hat Enterprise Linux or vise versa. That is the reason why we do not need to restrict our use cases on Fedora-based Docker containers only when we work on Fedora host machine, but we can use Docker containers based on CentOS or even Red Hat Enterprise Linux.
It is necessary to realize that when working with Docker containers, content of the image matters and is very important to trust it. Container itself is protected by cgroups and SELinux, but it still shares the kernel, so malicious container may theoretically harm the host system as well. See more information about security at Docker Security and Project Atomic article. Long story short, you should never run random image container on your production host.
You can find all the official Docker images provided by Fedora community in the official Fedora repository on Docker Hub.
Docker images in
fedora/ namespace feature
fedora:latest tag for rawhide and
fedora:23 tag for Fedora 23.
To get Fedora 23 base image, run:
Pull the image from docker.io $ sudo docker pull fedora:23 Run some command from the image $ sudo docker run --rm -ti fedora:23 bash
There are also a lot of application Docker images built as layered images on top of Fedora base image. It’s sources live in Fedora Dockerfiles repository and are available under
fedora/ namespace on the Docker Hub.
For example, to pull and run the MariaDB Docker container, run:
Pull the image from docker.io $ sudo docker pull fedora/mariadb Run the container $ sudo docker run fedora/mariadb fedora/mariadb
The list of available Fedora Docker images is
You can find find all the official Docker images provided by CentOS community in the official CentOS repository and the base Docker image in the official library on Docker Hub.
To get CentOS 7 base image, run:
Pull the image from docker.io $ sudo docker pull centos:7 Run some command from the image $ sudo docker run --rm -ti centos:7 bash
To get CentOS 6 base image, run:
Pull the image from docker.io $ sudo docker pull centos:6 Run some command from the image $ sudo docker run --rm -ti centos:6 bash
There is always
centos:latest tag for the latest released version.
The official CentOS repository contains Docker images that are similar to the images provided by Red Hat under
rhscl/ namespace.
These Docker images are based on Software Collections. Some of them (older versions) are released under the OpenShift organization, the newer versions are available under CentOS organization. Some of them are enabled for Source-To-Image.
To download them just run
docker pull IMAGE_NAME.
Authors: Adam Samalik, Budh Ram Gurung, Honza Horak, Jiri Popelka | https://developer.fedoraproject.org/tools/docker/docker-images.html | CC-MAIN-2018-34 | en | refinedweb |
close(2) BSD System Calls Manual close(2)
NAME
close -- delete a descriptor
SYNOPSIS
#include <unistd.h> int close(int fild;''). 4th Berkeley Distribution April 19, 1994 4th Berkeley Distribution
Mac OS X 10.9.1 - Generated Sun Jan 5 19:41:52 CST 2014 | http://manpagez.com/man/2/close/ | CC-MAIN-2018-34 | en | refinedweb |
SPAS 3.0 Item Renderers
Article published by Pascal
October 28, 2012
Currently, we are working on a new item renderers implementation in SPAS 3.0. Previously, some list components, such as listbox and all drop box components, did not use item renderers. It was the same for header items, for example in Accordion and DataGrid classes...
The coming SPAS release will include item renderers for all components that display visual collections. Because SPAS uses Look and Feel, instead of skin classes, as base implementation for visual components displaying, the difficulty was to isolate and separate the different parts responsible for displaying item renderers and the component Look and Feel.
Moreover, it was important to keep SPAS simplicity regarding the customization of the user interface (CSS colors, textures, graphic skins and custom Look and Feels). But another key point was to allow developers creating their own item renderers as easily as possible.
SPAS 3.0 Item Renderers Design
SPAS 3.0 item renderers have been splitted in two different families: item renderers and header item renderers.
Item renderers are used to define items in a visual collection, while header item renderers define items of object which display interactive header objects. Thus, some complex classes, such as the
Datagrid class, can use both implemetations.
Because item renderers are used to display huge set of information, they can be created from scratch, by implementing only the
ItemRenderer interface. This is how datagrid item renderers currently work.
To the opposite, header item renderers are usually static in a User Interface. For that reason, and because it is easier for improving the appearence of the application, header item renderers implement the
HeaderItemRenderer interface. The
HeaderItemRenderer interface extends the
IUIObject interface, so it means that header item renderers must extend the
UIObject class.
To do that and create header items more easily, you can extend the
AbstractHeaderItemRenderer class. (For example, the BasicHeaderItemRenderer class, used by
Accordion and
ExpandableBox classes, is a subbclass of
AbstractHeaderItemRenderer.)
Creating Item Renderers
Before extending the new item renderers capabilities to all SPAS components, we have created a brand new class, called
ExpandableBox, to experiment these functionalities.
The following sample application shows how to implement a new custom header item renderer for this class. The starting point of this example was the documentation
ExpandableBox example, which will be available in the next documentation update.
package { import org.flashapi.swing.*; import org.flashapi.swing.constants.*; import org.flashapi.swing.event.*; public class SpasHeaderTest extends Application { public function SpasHeaderTest() { super(init); } private var _boxes:Array; private function init():void { gradientBackground = bodyVisibility = true; _boxes = []; var panel:PanelContainer = new PanelContainer("ExpandableBox Example", 300); panel.layout.orientation = LayoutOrientation.VERTICAL; panel.padding = panel.verticalGap = 0; panel.autoHeight = true var eb1:ExpandableBox = createBox("Box # 1", ExpandableState.OPENED, "red", 25); var eb2:ExpandableBox = createBox("Box # 2", ExpandableState.CLOSED, "green", 62); var eb3:ExpandableBox = createBox("Box # 3", ExpandableState.CLOSED, "blue", 58); panel.addGraphicElements(eb1, eb2, eb3); addElement(panel); } private function createBox(label:String, state:String, color:String, percentage:uint):ExpandableBox { var eb:ExpandableBox = new ExpandableBox(300); eb.headerRenderer = CustomHeaderRenderer; eb.headerLabel = label; eb.state = state; eb.backgroundColor = color; eb.data = { trackColor:color, percentage:percentage }; eb.headerTexture = "brushed_metal.jpg"; _boxes.push(eb); return eb; } } } import org.flashapi.swing.core.spas_internal; import org.flashapi.swing.ProgressBar; import org.flashapi.swing.renderer.header.BasicHeaderItemRenderer; import org.flashapi.swing.renderer.header.HeaderItemRenderer; use namespace spas_internal; class CustomHeaderRenderer extends BasicHeaderItemRenderer implements HeaderItemRenderer { public function CustomHeaderRenderer() { super(); initObj(); } override public function updateItem(info:Object):void { var data:Object = info.data; if (data) updateValueBar(data); super.updateItem(info); } override protected function setItemsMetrics():void { super.setItemsMetrics(); _valueBar.width = $width - 20; } private var _valueBar:ProgressBar; private function initObj():void { $preferedHeight = 40; initSize(100, $preferedHeight); createValueBar(); } private function createValueBar():void { _valueBar = new ProgressBar(); _valueBar.target = spas_internal::uioSprite; _valueBar.height = 10; _valueBar.trackColor = 0x000000; _valueBar.trackOpacity = .25; _valueBar.borderAlpha = .5; _valueBar.display(10, 20); } private function updateValueBar(data:Object):void { var p:Number = data.percentage; var c:* = data.trackColor; if (p != _valueBar.value) _valueBar.value = p; if (c != _valueBar.color) _valueBar.color = c; } }
What's Next
It is important to provide developers a strong API for creating their own item renderers, especially if we consider the future evolutions of SPAS for targeting more particularly AIR and Mobile environements.
But the new item renderers capabilities will be also the base of the implementation for the
Tree class, which is wished by developers for a long time.
SPAS 3.0 item renderers capabilities will be available in the alpha 6.4 release.
Share this article:
There are no comments yet for this article. | http://www.flashapi.org/spas-blog/?url=spas-item-renderers&PHPSESSID=22705b2708b33abd364c0f25afa16f4b | CC-MAIN-2018-34 | en | refinedweb |
The Realm Data Model
At the heart of the Realm Mobile Platform is the Realm Mobile Database, an open source, embedded database library optimized for mobile use. If you’ve used a data store like SQLite or Core Data, at first glance the Realm Mobile Mobile concepts being discussed are cross-platform; simple examples will be given in Swift. Consult the documentation section for your preferred binding for examples in your language.
What is a Realm?
A Realm is an instance of a Realm Mobile Database container. Realms can be local, synchronized, or in-memory. In practice, your application works with any kind of Realm the same way. In-memory Realms have no persistence mechanism, and are meant for temporary storage.–so the Realm could represent a channel in a chat application, for instance, being updated by any user talking in that channel. Or, it could be a shopping cart, accessible only to devices owned by you.
If you’re used to working with other kinds of databases, here are some things that a Realm is not:
- A Realm is not a single application-wide database. While an application generally only uses one SQL database, an application often uses multiples Realms to organize data more efficiently, or to “silo” data for access control purposes.
- A Realm is not a table. Tables typically only store one kind of information: user records, email messages, and so on. But a Realm can contain multiple kinds of objects.
- A Realm is not a schemaless document store. Because object properties are analogous to key/value pairs, it’s easy to think of a Realm as a document store, but objects in a Realm have defined schemas that support giving values defaults or marking them as required or optional.
The hypothetical chat application above might use one synchronized Realm for public chats, another synchronized Realm storing user data, yet another synchronized Realm for a “master channel list” that’s read-only to non-administrative users, and a local Realm for persisted settings on that device. Or a multi-user application on the same device could store each user’s private data in a user-specific Realm. Realms are lightweight, and your applicaion can be using several at one time. (On mobile platforms, there are some resource constraints, but up to a dozen open at once should be no issue.)
Opening a Realm
In JavaScript, the schema for your model must be passed into the constructor as part of the configuration object. See Models in the JavaScript documentation for details.
When you open a Realm, you pass the constructor a configuration object that defines how to access it. The configuration object specifies where the Realm database is located:
- a path on the device’s local file system
- a URL to a Realm Object Server, with appropriate access credentials (user/password, authentication token)
- an identifier for an in-memory Realm
(The configuration object may specify other values, depending on your language, and is usually used for migrations when those are necessary. As noted above, the configuration object also includes the model schema in JavaScript.) If you don’t provide a configuration object, you’ll open the default Realm, which is a local Realm specific to that application.
Opening a synchronized Realm, therefore, might look like this. For this example, we’ll assume the Realm is named
"settings".
// create a configuration object let realmUrl = URL(string: "realms://example.com:9000/~/settings")! let realmUser = SyncCredentials.usernamePassword(username: username, password: password) let config = Realm.Configuration(user: realmUser, realmURL: realmUrl) // open the Realm with the configuration object let settingsRealm = try! Realm(configuration: config)
Opening a local or in-memory Realm is even simpler—it doesn’t need a URL or user argument–and opening the default Realm is just one line:
let defaultRealm = try! Realm()
Realm URLs
Synchronized Realms may be public, private, or shared. They’re all accessed the same way—on a low level, there’s no difference between them at all. The difference between them is access controls, which users can read and write to them. The URL format may also look a little different:
- A public Realm can be accessed by all users. Public realms are owned by the admin user on the Realm Object Server, and are read-only to non-admins. These Realms have URLs of the form
realms://server/realm-name.
- A private Realm is created and owned by a user, and by default only that user has read and write permissions for it. Private Realms have URLs of the form
realms://server/user-id/realm-name.
- A shared Realm is a private Realm whose owner has granted other users read (and possibly write) access—for instance, a shopping list shared by multiple family members. It has the same URL format as a private Realm (
realms://server/user-id/realm-name); the
user-idsegment of the path is the ID of the owning user. Sharing users all have their own local copies of the Realm, but there’s only one “master” copy synced through the Object Server.
Very often in private Realm URLs, you’ll see a tilde (
~) in place of the user ID; this is a shorthand for “fill in the current user’s ID.” This makes it easier for application developers to refer to private Realms in code: you can simply refer to a private settings Realm, for example, with
realms://server/~/settings.
You can think of Realm URLs as matching a file system: public Realms live at the top-level “root” directory, with user-owned Realms in subdirectories underneath. (The tilde was chosen to match the Unix style of referring to a user’s home directory with
~.)
Note: the
realms:// prefix is analogous to
https://, e.g., the “s” indicates the use of SSL encryption. A Realm URL beginning with
realm:// is unencrypted.
Permissions
Realms managed by the Realm Object Server have access permissions that control whether it’s public, private, or shared. Permissions are set on each Realm using three boolean flags:
- The
mayReadflag indicates a user can read from the Realm.
- The
mayWriteflag indicates a user can write to the Realm.
- The
mayManageflag indicates a user can change permissions on the Realm for other users.
The permission flags can be set on a default basis and a per-user basis. When a user requests access for a Realm, first the private: the owner has all permissions on it, and no other user has any permissions for it. Other users must be explicitly granted access. (Admin users, though, are always granted all permissions to all Realms on the Object Server.)
For more details about permissions, see:
- Authorization for the Realm Object Server
- Access Control for your language SDK:
Models and Schema
To store an object in a traditional relational database, the object’s class (say,
User) corresponds to a table (
users), with each object instance being mapped to a table row and the object’s properties mapping to table columns. In Realm, though, your code works with the actual objects.
class Dog: Object { dynamic var name = "" dynamic var age = 0 dynamic var breed: String? = nil dynamic var owner: Person? }
Our
Dog object has four properties, two of which are required and have default values (
name, with a default of the empty string, and
age, with a default of
0). The
breed property is an optional string, and the
owner property is an optional
Person object. (We’ll get to that.) Optional properties are sometimes called nullable properties by Realm, meaning that their values can be set to
nil (or
null, depending on your language). Optional properties don’t have to be set on objects to be stored in a Realm. Required properties, like
name and
age, cannot be set to
nil.
Check your language SDK for the proper syntax for required and optional properties! In Java, all properties are nullable by default, and required properties must be given the
@Required notation. JavaScript marks property types and default values in a different fashion.
Relations
In relational databases, relations between tables are defined with primary keys and foreign keys. If one or more
Dogs can be owned by one
Person, then the
Dog model will have a foreign key field that contains the primary key of the
Person who owns them. This can be described as a “has-many” relationship: a
Person has-many Dogs. The inverse relationship,
Dog belongs-to Person, isn’t explicitly defined in the database, although some ORMs implement it.
Realm has similar relationship concepts, declared with properties in your model schema.
To-One Relations
Let’s revisit the
Dog model above:
class Dog: Object { dynamic var name = "" dynamic var age = 0 dynamic var breed: String? = nil dynamic var owner: Person? }
The
owner property is the
Object subclass you want to establish a relationship with. This is all you need to define a “to-one” relationship (which could be either one-to-one or many-to-one). Now, you can define a relationship between a
Dog and a
Person:
let bob = Person() let fido = Dog() fido.owner = bob
This is similar in other languages. Here’s the Java implementation of
Dog:
public class Dog extends Realm Object { @Required private String name = ""; @Required private Integer age = 0; private String breed; private Person owner; }
To-Many Relationships
Let’s show the matching
Person class for
Dog:
class Person: Object { let name = "" let dogs = List<Dog>() }
A list in Realm contains one or more Realm objects. To add Fido to Bob’s list of dogs:
bob.dogs.append(fido)
Again, this is similar in other languages; in Java, you use
RealmList as the property type (to distinguish them from native Java lists):
public class Person extends RealmObject { @Required private String name; private RealmList<Dog> dogs;
(Note that in Java,
RealmList properties are always considered required, so they don’t need the
@Required notation. JavaScript defines list properties in a different fashion. As always, consult the documentation and API reference for your language SDKs for specifics.)
Inverse Relationships
You’ll note that defining the
Person has-many Dogs relationship didn’t automatically create a
Dog belongs-to Person relationship; both sides of the relationship need to be set explicitly. Adding a
Dog to a
Person’s
dogs list doesn’t automatically set the dog’s
owner property. It’s important to define both sides of this relationship: while it makes it easier for your code to traverse relationships, it’s also necessary for Realm’s notification system to work properly.
Some Realm language bindings provide “linking objects” properties, which return all objects that link to a given object from a specific property. To define
Dog this way, our model could be:
class Dog: Object { dynamic var name = "" dynamic var age = 0 dynamic var breed: String? = nil let owners = LinkingObjects(fromType: Person.self, property: "dogs") }
Now, when we execute
bob.dogs.append(fido), then
fido.owner will point to
bob.
Currently, Objective-C, Swift, and Xamarin provide linking objects.
Primary Keys
While Realm doesn’t have foreign keys, it does support primary key properties on Realm objects. Declaring one of the properties on a model class to be a primary key enforces uniqueness: only one object of that class with the same primary key can be added to a Realm. Primary keys are also implicit indexes: querying an object on its primary key is extremely efficient.
For details about how to specify a primary key, consult your language SDK’s documentation:
Indexes
Adding an index to a property significantly speeds up some queries. If you’re frequently making an equality comparison on a property—that is, retrieving an object with an exact match, like an email address—adding an index may be a good idea. Indexes also speed up exact matches with “contains” operators (e.g.,
name IN {'Bob', 'Agatha', 'Fred'}).
Consult your language SDK for information on how to set indexes:
Migrations
Since data models in Realm are defined as standard classes, making model changes is very easy. Suppose you had a
Person model which contained these properties:
class Person: Object { dynamic var firstName = "" dynamic var lastName = "" dynamic var age = 0 }
And you wished to combine the
firstName and
lastName properties into a single
name property. The model change is simple:
class Person: Object { dynamic var name = "" dynamic var age = 0 }
However, now the schema in the Realm file doesn’t match your model—when you try to use the existing Realm, errors will happen. To fix this, you’ll need to perform a migration—essentially, call a small bit of code in your application which can detect the old version of the Realm schema and upgrade it on disk.
Migrations With Local Realms
The specifics of how to perform a migration vary from language to language, but the basics are always similar:
- When you open the Realm, a schema version and migration function are passed to the constructor in the configuration object.
- If the existing schema version is equal to the schema version passed to the constructor, things proceed as normal (no migration is performed).
- If the existing schema version is less than the schema version passed to the constructor, the migration function is called.
In Swift, a migration function added to the configuration object might look like this.
let config = Realm.Configuration( schemaVersion: 1, migrationBlock: { migration, oldSchemaVersion in if (oldSchemaVersion < 1) { migration.enumerateObjects(ofType: Person.className()) { oldObject, newObject in let firstName = oldObject!["firstName"] as! String let lastName = oldObject!["lastName"] as! String newObject!["name"] = "\(firstName) \(lastName)" } } })
If no version is specified for a schema, it defaults to
0.
Migrations With Synced Realms
If a Realm is synced, the rules for performing migrations are a little different.
- functions).
If you can’t use a custom migration function, how do you make schema changes like the previous example on a synced Realm? There are two ways to do it:
- On the client side, you can write a notification handler that performs the changes.
- On the server side, you can write a Node.js function that performs them.
Neither of these will allow you to apply destructive changes to an existing Realm. Instead of doing that, create a new synced Realm with the new schema, then create a function which listens for changes on the old Realm and copies values to the new one. This can happen on either the client or server side.
More about Migrations
To see examples and more details in your preferred language binding, consult the language-specific documentation for migrations: | https://realm.io/jp/docs/data-model/ | CC-MAIN-2018-34 | en | refinedweb |
. And turns out I was right! Calling the following helper method from Spring’s TransactionSynchronizationManager showed that the code was indeed not running within a transaction:
org.springframework.transaction.support. TransactionSynchronizationManager.isActualTransactionActive()
So, what was wrong with my code?
Class-based proxying vs. interface-based proxying
First of all, my Spring transaction configuration using the
<tx:annotation-driven> tag was missing the
proxy-target-class="true" attribute. By default, this attribute is set to
false, such that only interfaces with
@Transactional annotations are proxied. This is useful if you divide your service architecture into a service interface and an implementation. In my case, this was not the case (by design), so it is required to set this attribute to
true for class-based proxies to be created using cglib.
@Transactional local method calls
The second problem I encountered was the way I had implemented my service methods. Have a look at this example:
@Transactional private void loadCategories() { ... categories = em.query(...); } public void getRootCategories() { if(null == categories) loadCategories(); ... }
Now, what’s so bad about this? The bad thing is the local call to the private method
loadCategories(). Because the method is private, cglib is not able to proxy the call, so the code within that method is directly executed, without first establishing a transaction. Also, since we call that method locally, we circumvent the proxy: Imagine the proxy as a wrapper around our service class which deals with the transaction management and then delegates to the implementations we have written. If we directly call our implementation instead of the wrapped method, Spring has no chance of noticing that it needs to set up a transaction.
How can we deal with this? My first idea was pretty straightforward, but currently apparently not supported by Spring (at least using
@Inject): It involves injecting an instance of the service into itself. Now, instead of invoking our (private) methods locally, we could invoke them on the self-injected service instance, which would be a proxy. As I said, Spring does not seem to support this, because it complains that it cannot auto-wire the field (correct me if I’m wrong).
A better alternative for the moment if you want to have transactional local or private method calls is to switch from cglib-based proxying to AspectJ compile-time or load-time weaving. For the German readers of this blog, Ralph Schär gives a detailled explanation of the backgrounds. The basic idea is that instead of wrapping our service with a proxy, the implementation of our service methods is automatically re-written using reflection and byte-code manipulation, such that the transaction management code can directly be “injected” into our original code. To get started, add the
mode="aspectj" attribute to your
<tx:annotation-driven/> tag:
<tx:annotation-driven
Then, add the following two Maven dependencies:
<dependency> <groupId>org.springframework</groupId> <artifactId>spring-aspects</artifactId> <version>3.0.5.RELEASE</version> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjrt</artifactId> <version>1.6.10</version> </dependency>
Compile-time weaving requires use of a special AspectJ compiler. If you are using Maven, you can include the AspectJ maven plugin to get this working:
<plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>aspectj-maven-plugin</artifactId> <version>1.3</version> <executions> <execution> <goals> <goal>compile</goal> <goal>test-compile</goal> </goals> </execution> </executions> <configuration> <source>1.6</source> <target>1.6</target> <aspectLibraries> <aspectLibrary> <groupId>org.springframework</groupId> <artifactId>spring-aspects</artifactId> </aspectLibrary> </aspectLibraries> </configuration> </plugin>
Note that in my case, I had to update my Maven project configuration and then restart Eclipse for AspectJ to begin working properly. After restart, a message popped up, asking me if I wanted to enable the JDT weaving service. Using the AJDT plugin’s Eclipse integration, you will now see aspect markers next to your source code:
For load-time weaving, you instead need to configure a class loader or “agent” that allows load-time weaving. You can either supply an agent within the JVM command line or configure your web container (e.g. Tomcat) to use such a container. See the Spring documentation for more information on this topic (this requires copying a JAR into Tomcat’s lib folder).
While enabling AspectJ weaving does involve a certain configuration effort, it not only solves the problem of (non-)transactional local method calls, but weaving also works if your transactional service methods are private.
Summary
What do we learn from this?
- If your services do not implement a service interface, set
proxy-target-class="true"in the
tx:annotation-drivenXML tag. Otherwise, the transactional service methods will not be proxied, and thus, no transactions will be generated.
- Verify that you have correctly set up transaction management. Use
TransactionSynchronizationManager.isActualTransactionActive()from within a service method to check this.
- Use logging. In his article at dzone.com, Tom Cellucci presents a logging filter for Logback, Log4J and Java Logging that prefixes each logging statement with a
[+]or
[-]sign if a transaction was active at the time the logging statement was written. Also, setting the logging level to DEBUG for the following namespaces can help in diagnosing issues:
org.springframework.aop org.springframework.transaction org.springframework.orm
- Remember that local method calls (i.e.
MyClass#methodA()calling
MyClass#methodB()) will not initiate a new transaction, even if the local method you call is annotated with
@Transactional. This is unless you use AspectJ compile-time or load-time weaving, instead of the default Java interface-based proxies or cglib proxies. Alternatively, you can inject your bean with its own instance, because that (in comparison to the
thisobject) will yield a proxy.
- Also,
@Transactionalmethods need to be public, unless you use AspectJ weaving.
Hi, Thank you for this article. It was very helpful completing my solution in the code below.
public interface Reader extends Iterator{
void setFetchSize(int fetchSize);
void setOffset(int offset);
}
public abstract class EntityList implements Reader {
private List currentBatch;
private E currentResult;
private boolean hasNext;
protected int fetchSize;
protected int offset;
public EntityList() {}
public void setFetchSize(int fetchSize) {
this.fetchSize = fetchSize;
}
public void setOffset(int offset) {
this.offset = offset;
}
public synchronized boolean hasNext() {
if(currentBatch!=null && currentBatch.size()>0) {
return true;
}
currentBatch=getCurrentBatch();
if (hasNext && currentBatch.size()==0) {
return hasNext();
}
return currentBatch != null && currentBatch.size()>0;
}
@Transactional
private List getCurrentBatch() {
List entities = findAll();
hasNext=entities.size() > 0;
offset+=fetchSize;
for (Iterator iterator=entities.iterator();iterator.hasNext(); ) {
final E entity=iterator.next();
if (!isReferenced(entity)) {
remove(entity);
iterator.remove();
offset–;
}
}
return entities;
}
public synchronized T next() {
if (currentBatch != null && currentBatch.size()>0 || hasNext()) {
currentResult=currentBatch.remove(0);
if (currentBatch.size()==0) {
currentBatch=null;
}
return eval(currentResult);
}
throw new NoSuchElementException();
}
@Transactional
public synchronized void remove() {
if (currentResult==null) {
throw new IllegalStateException();
}
remove(currentResult);
offset–;
currentResult=null;
}
protected abstract List findAll();
protected abstract void remove(E entity);
protected abstract boolean isReferenced(E e);
protected abstract T eval(E entity);
public static class Servers extends EntityList {
private ServerDAO serverDAO;
private PortDAO portDAO;
@Override
protected List findAll() {
return getServerDAO().findAll(fetchSize, offset);
}
@Transactional
public void remove(Server server) {
getServerDAO().makeTransient(server);
}
public void setServerDAO(ServerDAO serverDAO) {
this.serverDAO = serverDAO;
}
public ServerDAO getServerDAO() {
return serverDAO;
}
public void setPortDAO(PortDAO portDAO) {
this.portDAO = portDAO;
}
public PortDAO getPortDAO() {
return portDAO;
}
@Override
protected String eval(Server server) {
return server.getName();
}
@Override
protected boolean isReferenced(Server server) {
for (MeasuringPoint measuringPoint:server.getMeasuringPoints()) {
if (measuringPoint.getMeasurementResults().size() > 0) {
return true;
}
Port port=measuringPoint.getPort();
port.getMeasuringPoints().remove(measuringPoint);
if(port.getMeasuringPoints().size()==0) {
getPortDAO().makeTransient(port);
}
}
return false;
}
}
}
@Transaciton is only valid to the thread that obtained the bean from the Spring context. It took me some time to figure out because I was implementing this with SWT application and I was passing the bean obtained in the GUI thread to the background thread. | http://blog.timmlinder.com/2011/01/spring-transactional-checking-for-transaction-support-and-local-method-calls/ | CC-MAIN-2018-34 | en | refinedweb |
Blacklist and Token Revoking¶
This extension supports optional token revoking out of the box. This will allow you to revoke a specific token so that it can no longer access your endpoints.
You will have to choose what tokens you want to check against the blacklist. In most cases, you will probably want to check both refresh and access tokens, which is the default behavior. However, if the extra overhead of checking tokens is a concern you could instead only check the refresh tokens, and set the access tokens to have a short expires time so any damage a compromised token could cause is minimal.
Blacklisting works by is providing a callback function to this extension, using the
token_in_blacklist_loader() decorator.
This method will be called whenever the specified tokens (access and/or refresh)
are used to access a protected endpoint. If the callback function says that the
token is revoked, we will not allow the call to continue, otherwise we will
allow the call to access the endpoint as normal.
Here is a basic example of this in action.
from flask import Flask, request, jsonify from flask_jwt_extended import ( JWTManager, jwt_required, get_jwt_identity, create_access_token, create_refresh_token, jwt_refresh_token_required, get_raw_jwt ) # Setup flask app = Flask(__name__) # Enable blacklisting and specify what kind of tokens to check # against the blacklist app.config['JWT_SECRET_KEY'] = 'super-secret' # Change this! app.config['JWT_BLACKLIST_ENABLED'] = True app.config['JWT_BLACKLIST_TOKEN_CHECKS'] = ['access', 'refresh'] jwt = JWTManager(app) # A storage engine to save revoked tokens. In production if # speed is the primary concern, redis is a good bet. If data # persistence is more important for you, postgres is another # great option. In this example, we will be using an in memory # store, just to show you how this might work. For more # complete examples, check out these: # # blacklist = set() # For this example, we are just checking if the tokens jti # (unique identifier) is in the blacklist set. This could # be made more complex, for example storing all tokens # into the blacklist with a revoked status when created, # and returning the revoked status in this call. This # would allow you to have a list of all created tokens, # and to consider tokens that aren't in the blacklist # (aka tokens you didn't create) as revoked. These are # just two options, and this can be tailored to whatever # your application needs. @jwt.token_in_blacklist_loader def check_if_token_in_blacklist(decrypted_token): jti = decrypted_token['jti'] return jti in blacklist # Standard login endpoint @app.route('/login', methods=['POST']) def login(): username = request.json.get('username', None) password = request.json.get('password', None) if username != 'test' or password != 'test': return jsonify({"msg": "Bad username or password"}), 401 ret = { 'access_token': create_access_token(identity=username), 'refresh_token': create_refresh_token(identity=username) } return jsonify(ret), 200 # Standard refresh endpoint. A blacklisted refresh token # will not be able to access this endpoint @app.route('/refresh', methods=['POST']) @jwt_refresh_token_required def refresh(): current_user = get_jwt_identity() ret = { 'access_token': create_access_token(identity=current_user) } return jsonify(ret), 200 # Endpoint for revoking the current users access token @app.route('/logout', methods=['DELETE']) @jwt_required def logout(): jti = get_raw_jwt()['jti'] blacklist.add(jti) return jsonify({"msg": "Successfully logged out"}), 200 # Endpoint for revoking the current users refresh token @app.route('/logout2', methods=['DELETE']) @jwt_refresh_token_required def logout2(): jti = get_raw_jwt()['jti'] blacklist.add(jti) return jsonify({"msg": "Successfully logged out"}), 200 # This will now prevent users with blacklisted tokens from # accessing this endpoint @app.route('/protected', methods=['GET']) @jwt_required def protected(): return jsonify({'hello': 'world'}) if __name__ == '__main__': app.run()
In production, you will likely want to use either a database or in memory store (such as redis) to store your tokens. In memory stores are great if you are wanting to revoke a token when the users logs out, as they are blazing fast. A downside to using redis is that in the case of a power outage or other such event, it’s possible that you might ‘forget’ that some tokens have been revoked, depending on if the redis data was synced to disk.
In contrast to that, databases are great if the data persistance is of the highest importance (for example, if you have very long lived tokens that other developers use to access your api), or if you want to add some addition features like showing users all of their active tokens, and letting them revoke and unrevoke those tokens.
For more in depth examples of these, check out: | http://flask-jwt-extended.readthedocs.io/en/latest/blacklist_and_token_revoking.html | CC-MAIN-2018-34 | en | refinedweb |
Mongoose networking library aims to run pretty much everywhere. Written in ANSI C, it is extremely portable. However, if we target embedded platforms, then other than writing platform-independent code, we also have to support various embedded implementations of the TCP/IP stack and account for some platform-specific quirks. Mongoose has already been ported to a lot of platforms, e.g. CC3200, ESP8266, STM32, PIC32, to name a few.
What Mongoose can do? Turn a device into a RESTful server - make it controllable via a browser! Or, call an external RESTful server. Or, create a Web UI for a device. Push data in real-time over WebSocket or MQTT. Or, make a device discoverable in the local network (mDNS/dns-SD, Bonjour). Or, many more other possibilities. See a long list of ready-made examples at Github.
We decided that it might also be interesting to run Mongoose on BlueTooth-equipped devices. So today, I'm going to show you how to run Mongoose on either nRF52 or nRF51 Development Kit.
For the sake of simplicity, we'll use an HTTP example. However, Mongoose supports a lot of protocols, including MQTT which is a bidirectional Pub/Sub protocol that can be used either to send data to the device (e.g. adjust some settings) or report some data from the device.
As you know, nRF51/52 has only BlueTooth connectivity, so it cannot connect to the Internet directly; however, Nordic Semiconductor provides software to support 6LoWPAN, i.e. in this case it's IPv6 over BlueTooth.
There is a number of possible ways to establish a 6LoWPAN connection: we can implement a proper commissioning, or just connect devices manually to a host Linux machine. The exact way to establish an Internet connection is out of scope of this text. I'm going to use the option to connect devices manually to a host Linux machine. For our needs today, the bare minimum is enough: make a particular device accessible through a link-local IPv6 address.
Let's begin!
In order to reproduce this, you'll need a Linux machine with a 6LoWPAN-enabled kernel (I tested it on 4.4.0), and BlueTooth 4.0 hardware.
nRF51/52 firmware with Mongoose
Luckily, nRF5 IoT SDK uses LwIP: a popular TCP/IP stack for embedded systems. LwIP was already supported by Mongoose, so, no real porting was required. Only adjust LwIP client code a bit (make it handle IPv6 connections correctly), and glue things together by providing the right compiler flags.
You can see a ready-made example code, Keil uVision project and arm-gcc Makefile here (make sure to read the readme there); or you can follow the guide below and implement it step-by-step. In any case, you'll need a nRF51 IoT SDK for nRF51, and nRF5 IoT SDK for nRF52.
We're going to modify a TCP server example shipped with the SDK. It's located in
examples/iot/tcp/server
.
Mongoose is distributed as a single C source file, plus C header (you can get both of them from the Cesanta website). Adding it to your project is just a matter of adding a single file
mongoose.c
and providing a few compile flags. For nRF51, the bare minimum is:
-DCS_PLATFORM=CS_P_NRF51
. And for nRF52, respectively,
-DCS_PLATFORM=CS_P_NRF52.
We would also like to disable some extra functionality which we don't need at the moment. So, we're going to provide a few more flags:
-DMG_DISABLE_HTTP_DIGEST_AUTH -DMG_DISABLE_MD5 -DMG_DISABLE_HTTP_KEEP_ALIVE
When we add the aforementioned
mongoose.c
to the build and add the needed flags to
CFLAGS
, the project will compile. Not bad! Let's now try to do something with Mongoose.
First of all, we'll need a bit more of the heap memory. By default, it's 512 bytes; for our example, we'll need at least 1024. It is defined as a preprocessor
macro __HEAP_SIZE
.
In the
main()
function, there is some initialization followed by an endless loop in which events are handled. So now, before entering event loop, we're going to add the Mongoose initialization:
struct mg_mgr mgr; /* Initialize event manager object */ mg_mgr_init(&mgr, NULL); /* * Note that many connections can be added to a single event manager * Connections can be created at any point, e.g. in event handler function */ const char *err; struct mg_bind_opts opts = {}; opts.error_string = &err; /* Create listening connection and add it to the event manager */ struct mg_connection *nc = mg_bind_opt(&mgr, "80", ev_handler, opts); if (nc == NULL) { printf("Failed to create listener: %s\n", err); return 1; } /* Attach a built-in HTTP event handler to the connection */ mg_set_protocol_http_websocket(nc);
And obviously, we also need to add a few calls to the event loop itself:
sys_check_timeouts(); mg_mgr_poll(&mgr, 0);
Now, the only thing missing is the
ev_handler
: a callback which will be called by Mongoose at certain events. In this example, it's going to be a simple HTTP event handler:
// Define an event handler function void ev_handler(struct mg_connection *nc, int ev, void *ev_data) { if (ev == MG_EV_POLL) return; /* printf("ev %d\r\n", ev); */ switch (ev) { case MG_EV_ACCEPT: { char addr[32]; mg_sock_addr_to_str(&nc->sa, addr, sizeof(addr), MG_SOCK_STRINGIFY_IP | MG_SOCK_STRINGIFY_PORT); printf("%p: Connection from %s\r\n", nc, addr); break; } case MG_EV_HTTP_REQUEST: { struct http_message *hm = (struct http_message *) ev_data; char addr[32]; mg_sock_addr_to_str(&nc->sa, addr, sizeof(addr), MG_SOCK_STRINGIFY_IP | MG_SOCK_STRINGIFY_PORT); printf("%p: %.*s %.*s\r\n", nc, (int) hm->method.len, hm->method.p, (int) hm->uri.len, hm->uri.p); mg_send_response_line(nc, 200, "Content-Type: text/html\r\n" "Connection: close"); mg_printf(nc, "\r\n<h1>Hello, sir!</h1>\r\n" "You asked for %.*s\r\n", (int) hm->uri.len, hm->uri.p); nc->flags |= MG_F_SEND_AND_CLOSE; LEDS_INVERT(LED_THREE); break; } case MG_EV_CLOSE: { printf("%p: Connection closed\r\n", nc); break; } } }
We might also want to adjust the device name: open
config/ipv6_medium_ble_cfg.h
and set
DEVICE_NAME
to, say,
"Mongoose_example"
.
Make sure you have
#include "mongoose.h"
in your
make.c
, and now we can build and flash our example project! After that, LED1 should turn on, which means that the device is in the advertising mode.
Having that done, we need to finally establish 6LoWPAN connection from our Linux host to the device we've just flashed.
Establish IPv6 over BlueTooth connection
As I mentioned in the beginning of the article, there are a few ways to organize connections; here we're going with the simplest one.
First of all, our device should already advertise itself. Let's check it:
$ sudo hcitool lescan LE Scan ... 00:31:A0:49:01:27 Mongoose_example 00:31:A0:49:01:27 (unknown) 00:31:A0:49:01:27 Mongoose_example 00:31:A0:49:01:27 (unknown)
Awesome, here it is. Before we can connect to it, we need to load and enable the
bluetooth_6lowpan
kernel module:
$ sudo modprobe bluetooth_6lowpan $ sudo bash -c 'echo 1 > /sys/kernel/debug/bluetooth/6lowpan_enable'
Now, we can make a connection by executing the following command (replace
00:AA:BB:CC:DD:EE
with the actual address of your device):
$ sudo bash -c 'echo "connect 00:AA:BB:CC:DD:EE 1" > /sys/kernel/debug/bluetooth/6lowpan_control'
(Note that we can't just
sudo echo "...." > ....
, because in this case the redirection won't be covered by
sudo
)
After that, the LED2 should turn on, which means that the device is connected.
And we can ping it by the link-local address of the following form:
$ ping6 fe80::2aa:bbff:fecc:ddee%bt0
Again, replace
aa
,
bb,
cc
,
dd
,
ee
with the address of your device. If that works, then we can finally access our HTTP endpoint on the device!
$ curl http://[fe80::2aa:bbff:fecc:ddee%bt0]/foo/bar <h1>Hello, sir!</h1> You asked for /foo/bar
Yay!
Conclusion
Now we have the whole power of Mongoose in our nRF51/nRF52 device; we can implement whatever functionality we actually need. Of course, we're limited by the amount of memory available on the device; but given the event-based model which Mongoose uses, it really has a very low footprint.
You can learn more about Mongoose by reading the doc. | https://www.hackster.io/dfrank/mongoose-embedded-networking-library-on-nrf51-and-nrf52-547b15 | CC-MAIN-2018-34 | en | refinedweb |
Apache Nifi Processors in version 0.7.4
For other nifi versions, please reference our default processors post. Check the Apache nifi site for downloads or any nifi version or for current version docs
List of Processors
- AttributesToJSON
- Base64EncodeContent
- CompressContent
- ConsumeAMQP
- ConsumeJMS
- ConsumeKafka
- ConsumeMQTT
- ControlRate
- ConvertAvroSchema
- ConvertAvroToJSON
- ConvertCharacterSet
- ConvertCSVToAvro
- ConvertJSONToAvro
- ConvertJSONToSQL
- CreateHadoopSequenceFile
- DebugFlow
- DeleteDynamoDB
- DeleteS3Object
- DeleteSQS
- DetectDuplicate
- DistributeLoad
- DuplicateFlowFile
- EncryptContent
- EvaluateJsonPath
- EvaluateRegularExpression
- EvaluateXPath
- EvaluateXQuery
- ExecuteFlumeSink
- ExecuteFlumeSource
- ExecuteProcess
- ExecuteScript
- ExecuteSQL
- ExecuteStreamCommand
- ExtractAvroMetadata
- ExtractHL7Attributes
- ExtractImageMetadata
- ExtractMediaMetadata
- ExtractText
- FetchDistributedMapCache
- FetchElasticsearch
- FetchFile
- FetchHDFS
- FetchS3Object
- FetchSFTP
- GenerateFlowFile
- GeoEnrichIP
- GetAzureEventHub
- GetCouchbaseKey
- GetDynamoDB
- GetFile
- GetFTP
- GetHBase
- GetHDFS
- GetHDFSEvents
- GetHDFSSequenceFile
- GetHTMLElement
- GetHTTP
- GetJMSQueue
- GetJMSTopic
- GetKafka
- GetMongo
- GetSFTP
- GetSNMP
- GetSolr
- GetSplunk
- GetSQS
- GetTwitter
- HandleHttpRequest
- HandleHttpResponse
- HashAttribute
- HashContent
- IdentifyMimeType
- InferAvroSchema
- InvokeHTTP
- InvokeScriptedProcessor
- JoltTransformJSON
- ListenHTTP
- ListenLumberjack
- ListenRELP
- ListenSyslog
- ListenTCP
- ListenUDP
- ListFile
- ListHDFS
- ListS3
- ListSFTP
- LogAttribute
- MergeContent
- ModifyBytes
- ModifyHTMLElement
- MonitorActivity
- ParseSyslog
- PostHTTP
- PublishAMQP
- PublishJMS
- PublishKafka
- PublishMQTT
- PutAzureEventHub
- PutCassandraQL
- PutCouchbaseKey
- PutDistributedMapCache
- PutDynamoDB
- PutElasticsearch
- PutEmail
- PutFile
- PutFTP
- PutHBaseCell
- PutHBaseJSON
- PutHDFS
- PutHiveQL
- PutHTMLElement
- PutJMS
- PutKafka
- PutKinesisFirehose
- PutLambda
- PutMongo
- PutRiemann
- PutS3Object
- PutSFTP
- PutSlack
- PutSNS
- PutSolrContentStream
- PutSplunk
- PutSQL
- PutSQS
- PutSyslog
- PutTCP
- PutUDP
- QueryCassandra
- QueryDatabaseTable
- ReplaceText
- ReplaceTextWithMapping
- ResizeImage
- RouteHL7
- RouteOnAttribute
- RouteOnContent
- RouteText
- ScanAttribute
- ScanContent
- SegmentContent
- SelectHiveQL
- SetSNMP
- SplitAvro
- SplitContent
- SplitJson
- SplitText
- SplitXml
- SpringContextProcessor
- StoreInKiteDataset
- TailFile
- TransformXml
- UnpackContent
- UpdateAttribute
- ValidateXml
- YandexTranslate
AttributesToJSON
Generates a JSON representation of the input FlowFile Attributes. The resulting JSON can be written to either a new Attribute ‘JSONAttributes’ or written to the FlowFile as content.
Base64EncodeContent
Encodes or decodes content to and from base64
CompressContent
Compresses or decompresses the contents of FlowFiles using a user-specified compression algorithm and updates the mime.type attribute as appropriate
ConsumeAMQP
Consumes AMQP Message transforming its content to a FlowFile and transitioning it to ‘success’ relationship
ConsumeJMS
Consumes JMS Message of type BytesMessage or TextMessage transforming its content to a FlowFile and transitioning it to ‘success’ relationship. JMS attributes such as headers and properties will be copied as FlowFile attributes.
ConsumeKafka
Consumes messages from Apache Kafka,specifically built against the Kafka 0.9.x Consumer API. The complementary NiFi processor for sending messages is PublishKafka.
ConsumeMQTT
Subscribes to a topic and receives messages from an MQTT broker
ControlRate
Controls the rate at which data is transferred to follow-on processors. If you configure a very small Time Duration, then the accuracy of the throttle gets worse. You can improve this accuracy by decreasing the Yield Duration, at the expense of more Tasks given to the processor.
ConvertAvroSchema
Convert records from one Avro schema to another, including support for flattening and simple type conversions
ConvertAvroToJSON
Converts a Binary Avro record into a JSON object. This processor provides a direct mapping of an Avro field to a JSON field, such that the resulting JSON will have the same hierarchical structure as the Avro document. Note that the Avro schema information will be lost, as this is not a translation from binary Avro to JSON formatted Avro. The output JSON is encoded the UTF-8 encoding. If an incoming FlowFile contains a stream of multiple Avro records, the resultant FlowFile will contain a JSON Array containing all of the Avro records or a sequence of JSON Objects. If an incoming FlowFile does not contain any records, an empty JSON object is the output. Empty/Single Avro record FlowFile inputs are optionally wrapped in a container as dictated by ‘Wrap Single Record’
ConvertCharacterSet
Converts a FlowFile’s content from one character set to another
ConvertCSVToAvro
Converts CSV files to Avro according to an Avro Schema
ConvertJSONToAvro
Converts JSON files to Avro according to an Avro Schema
ConvertJSONToSQL.
CreateHadoopSequenceFile
Creates Hadoop Sequence Files from incoming flow files
Debug.
DeleteDynamoDB
Deletes a document from DynamoDB based on hash and range key. The key can be string or number. The request requires all the primary keys for the operation (hash or hash and range key)
DeleteS3Object
Deletes FlowFiles on an Amazon S3 Bucket. If attempting to delete a file that does not exist, FlowFile is routed to success.
DeleteSQS
Deletes a message from an Amazon Simple Queuing Service Queue
DetectDuplicate
Caches a value, computed from FlowFile attributes, for each incoming FlowFile and determines if the cached value has already been seen. If so, routes the FlowFile to ‘duplicate’ with an attribute named ‘original.identifier’ that specifies the original FlowFile’s “description”, which is specified in the
DistributeLoad
Distributes FlowFiles to downstream processors based on a Distribution Strategy. If using the Round Robin strategy, the default is to assign each destination a weighting of 1 (evenly distributed). However, optional propertiescan be added to the change this; adding a property with the name ‘5’ and value ‘10’ means that the relationship with name ‘5’ will be receive 10 FlowFiles in each iteration instead of 1.
DuplicateFlowFile
Intended for load testing, this processor will create the configured number of copies of each incoming FlowFile
EncryptContent
Encrypts or Decrypts a FlowFile using either symmetric encryption with a password and randomly generated salt, or asymmetric encryption using a public and secret key. ‘auto-detect’ will make a determination based off the configured destination. When ‘Destination’ is set to ‘flowfile-attribute,’ a return type of ‘scalar’ will be used. When ‘Destination’ is set to ‘flowfile-content,’ a return type of ‘JSON’ will be used.If the JsonPath evaluates to a JSON array or JSON object and the Return Type is set to ‘scalar’ the FlowFile will be unmodified and will be routed to failure. A Return Type of JSON can return scalar values if the provided JsonPath evaluates to the specified value and will be routed as a match.If Destination is ‘flowfile-content’ and the JsonPath does not evaluate to a defined path,RegularExpression
WARNING: This has been deprecated and will be removed in 0.2.0.
Use ExtractText instead.
EvaluateXPath
Evaluates one or more XPaths against the content of a FlowFile. The results of those XPaths are assigned to FlowFile Attributes or are written to the content of the FlowFile itself, depending on configuration of the Processor. XPaths are entered by adding user-defined properties; the name of the property maps to the Attribute Name into which the result will be placed (if the Destination is flowfile-attribute; otherwise, the property name is ignored). The value of the property must be a valid XPath expression. If the XPath evaluates to more than one node and the Return Type is set to ‘nodeset’ (either directly, or via ‘auto-detect’ with a Destination of ‘flowfile-content’), the FlowFile will be unmodified and will be routed to failure. If the XPath does not evaluate to a Node,XQuery
Evaluates one or more XQueries against the content of a FlowFile. The results of those XQueries are assigned to FlowFile Attributes or are written to the content of the FlowFile itself, depending on configuration of the Processor. XQueries are entered by adding user-defined properties; the name of the property maps to the Attribute Name into which the result will be placed (if the Destination is ‘flowfile-attribute’; otherwise, the property name is ignored). The value of the property must be a valid XQuery. If the XQuery returns more than one result, new attributes or FlowFiles (for Destinations of ‘flowfile-attribute’ or ‘flowfile-content’ respectively) will be created for each result (attributes will have a ‘.n’ one-up number appended to the specified attribute name). If any provided XQuery returns a result, the FlowFile(s) will be routed to ‘matched’. If no provided XQuery returns a result, the FlowFile will be routed to ‘unmatched’. If the Destination is ‘flowfile-attribute’ and the XQueries matche nothing, no attributes will be applied to the FlowFile.
ExecuteFlumeSink
Execute a Flume sink. Each input FlowFile is converted into a Flume Event for processing by the sink.
ExecuteFlumeSource
Execute a Flume source. Each Flume Event is sent to the success relationship as a FlowFile
ExecuteProcess
Runs an operating system command specified by the user and writes the output of that command to a FlowFile. If the command is expected to be long-running, the Processor can output the partial data on a specified interval. When this option is used, the output is expected to be in textual format, as it typically does not make sense to split binary data on arbitrary time-based intervals.
ExecuteScript.
ExecuteSQLutesql.row.count’ indicates how many rows were selected.
ExecuteStreamCommand
Executes an external command on the contents of a flow file, and creates a new flow file with the results of the command.
ExtractAvroMetadata
Extracts metadata from the header of an Avro datafile.
ExtractHL7Attributes
Extracts information from an HL7 (Health Level 7) formatted FlowFile and adds the information as FlowFile Attributes. The attributes are named as
ExtractImageMetadata
Extract the image metadata from flowfiles containing images. This processor relies on this metadata extractor library. It extracts a long list of metadata types including but not limited to EXIF, IPTC, XMP and Photoshop fields. For the full list visit the library’s website.NOTE: The library being used loads the images into memory so extremely large images may cause problems.
ExtractMediaMetadata.
ExtractText
Evaluates one or more Regular Expressions against the content of a FlowFile. The results of those Regular Expressions are assigned to FlowFile Attributes. Regular Expressions are entered by adding user-defined properties; the name of the property maps to the Attribute Name into which the result will be placed. The first capture group, if any found, will be placed into that attribute name.But all capture groups, including the matching string sequence itself will also be provided at that attribute name with an index value provided, with the exception of a capturing group that is optional and does not match - for example, given the attribute name “regex” and expression “abc(def)?(g)” we would add an attribute “regex.1” with a value of “def” if the “def” matched. If the “def” did not match, no attribute named “regex.1” would be added but an attribute named “regex.2” with a value of “g” will be added regardless.The value of the property must be a valid Regular Expressions with one or more capturing groups. If the Regular Expression matches more than once, only the first match will be used. If any provided Regular Expression matches, the FlowFile(s) will be routed to ‘matched’. If no provided Regular Expression matches, the FlowFile will be routed to ‘unmatched’ and no attributes will be applied to the FlowFile.
FetchDistributedMapCache
Computes a cache key from FlowFile attributes, for each incoming FlowFile, and fetches the value from the Distributed Map Cache associated with that key. The incoming FlowFile’s content is replaced with the binary data received by the Distributed Map Cache. If there is no value stored under that key then the flow file will be routed to ‘not-found’. Note that the processor will always attempt to read the entire cached value into memory before placing it in it’s destination. This could be potentially problematic if the cached value is very large.
FetchElasticsearch
Retrieves a document from Elasticsearch using the specified connection properties and the identifier of the document to retrieve. If the cluster has been configured for authorization and/or secure transport (SSL/TLS) and the Shield plugin is available, secure connections can be made. This processor supports Elasticsearch 2.x clusters.
FetchFile
Reads the contents of a file from disk and streams it into the contents of an incoming FlowFile. Once this is done, the file is optionally moved elsewhere or deleted to help keep the file system organized.
FetchHDFS
Retrieves a file from HDFS. The content of the incoming FlowFile is replaced by the content of the file in HDFS. The file in HDFS is left intact without any changes being made to it.
FetchS3Object
Retrieves the contents of an S3 Object and writes it to the content of a FlowFile
FetchSFTP
Fetches the content of a file from a remote SFTP server and overwrites the contents of an incoming FlowFile with the content of the remote file.
GenerateFlowFile
This processor creates FlowFiles of random data and is used for load testing
GeoEnrichIP.
GetAzureEventHub
Receives messages from a Microsoft Azure Event Hub, writing the contents of the Azure message to the content of the FlowFile
GetCouchbaseKey
Get a document from Couchbase Server via Key/Value access. The ID of the document to fetch may be supplied by setting the
GetDynamoDB.
GetFile
Creates FlowFiles from files in a directory. NiFi will ignore files it doesn’t have at least read permissions for.
GetFTP
Fetches files from an FTP Server and creates FlowFiles from them
GetHBase
This Processor polls HBase for any records in the specified table. The processor keeps track of the timestamp of the cells that it receives, so that as new records are pushed to HBase, they will automatically be pulled. Each record is output in JSON format, as {“row”: “
GetHDFS
Fetch files from Hadoop Distributed File System (HDFS) into FlowFiles. This Processor will delete the file from HDFS after fetching it.
GetHDFSEvents
This processor polls the notification events provided by the HdfsAdmin API. Since this uses the HdfsAdmin APIs it is required to run as an HDFS super user. Currently there are six types of events (append, close, create, metadata, rename, and unlink). Please see org.apache.hadoop.hdfs.inotify.Event documentation for full explanations of each event. This processor will poll for new events based on a defined duration. For each event received a new flow file will be created with the expected attributes and the event itself serialized to JSON and written to the flow file’s content. For example, if event.type is APPEND then the content of the flow file will contain a JSON file containing the information about the append event. If successful the flow files are sent to the ‘success’ relationship. Be careful of where the generated flow files are stored. If the flow files are stored in one of processor’s watch directories there will be a never ending flow of events. It is also important to be aware that this processor must consume all events. The filtering must happen within the processor. This is because the HDFS admin’s event notifications API does not have filtering.
GetHDFSSequenceFile
Fetch sequence files from Hadoop Distributed File System (HDFS) into FlowFiles
GetHTMLElement
Extracts HTML element values from the incoming flowfile’s content using a CSS selector.. The result of “querying” the HTML DOM may produce 0-N results. If no results are found the flowfile will be transferred to the “element not found” relationship to indicate so to the end user. If N results are found a new flowfile will be created and emitted for each result. The query result will either be placed in the content of the new flowfile or as an attribute of the new flowfile. By default the result is written to an attribute. This can be controlled by the “Destination” property. Resulting query values may also have data prepended or appended to them by setting the value of property “Prepend Element Value” or “Append Element Value”. Prepended and appended values are treated as string values and concatenated to the result retrieved from the HTML DOM query operation. A more thorough reference for the CSS selector syntax can be found.
GetJMSQueue
Pulls messages from a JMS Queue, creating a FlowFile for each JMS Message or bundle of messages, as configured
GetJMSTopic
Pulls messages from a JMS Topic, creating a FlowFile for each JMS Message or bundle of messages, as configured
GetKafka
Fetches messages from Apache Kafka, specifically for 0.8.x versions. The complementary NiFi processor for sending messages is PutKafka.
GetMongo
Creates FlowFiles from documents in MongoDB
GetSFTP
Fetches files from an SFTP Server and creates FlowFiles from them
GetSNMP
Retrieves information from SNMP Agent and outputs a FlowFile with information in attributes and without any content
GetSolr
Queries Solr and outputs the results as a FlowFile
GetSplunk
Retrieves data from Splunk Enterprise.
GetSQS
Fetches messages from an Amazon Simple Queuing Service Queue
GetTwitter
Pulls status changes from Twitter’s streaming API
HandleHttpRequest
Starts an HTTP Server and listens for HTTP Requests. For each request, creates a FlowFile and transfers to ‘success’. This Processor is designed to be used in conjunction with the HandleHttpResponse Processor in order to create a Web Service
HandleHttpResponse
Sends an HTTP Response to the Requestor that generated a FlowFile. This Processor is designed to be used in conjunction with the HandleHttpRequest in order to create a web service.
HashAttribute.
HashContent
Calculates a hash value for the Content of a FlowFile and puts that hash value on the FlowFile as an attribute whose name is determined by the
IdentifyMimeType
Attempts to identify the MIME Type used for a FlowFile. If the MIME Type can be identified, an attribute with the name ‘mime.type’ is added with the value being the MIME Type. If the MIME Type cannot be determined, the value will be set to ‘application/octet-stream’. In addition, the attribute mime.extension will be set if a common file extension for the MIME Type is known.
InferAvroSchema.
InvokeHTTP).
InvokeScriptedProcessor
Experimental - Invokes a script engine for a Processor defined in the given script. The script must define a valid class that implements the Processor interface, and it must set a variable ‘processor’ to an instance of the class. Processor methods such as onTrigger() will be delegated to the scripted Processor instance. Also any Relationships or PropertyDescriptors defined by the scripted processor will be added to the configuration dialog. Experimental: Impact of sustained usage not yet verified.
JoltTransformJSON
Applies a list of Jolt specifications to the flowfile JSON payload. A new FlowFile is created with transformed content and is routed to the ‘success’ relationship. If the JSON transform fails, the original FlowFile is routed to the ‘failure’ relationship.
ListenHTTP
Starts an HTTP Server that is used to receive FlowFiles from remote sources. The default URI of the Service will be http://{hostname}:{port}/contentListener
ListenLumberjackRELP
Listens for RELP messages being sent to a given port over TCP. Each message will be acknowledged after successfully writing the message to a FlowFile. Each FlowFile will contain data portion of one or more RELP frames. In the case where the RELP frames contain syslog messages, the output of this processor can be sent to a ParseSyslog processor for further processing.
ListenSyslog
Listens for Syslog messages being sent to a given port over TCP or UDP. Incoming messages are checked against regular expressions for RFC5424 and RFC3164 formatted messages. The format of each message is: (
ListenTCP
Listens for incoming TCP connections and reads data from each connection using a line separator as the message demarcator. The default behavior is for each message to produce a single FlowFile, however this can be controlled by increasing the Batch Size to a larger value for higher throughput. The Receive Buffer Size must be set as large as the largest messages expected to be received, meaning if every 100kb there is a line separator, then the Receive Buffer Size must be greater than 100kb.
ListenUDP
Listens for Datagram Packets on a given port. The default behavior produces a FlowFile per datagram, however for higher throughput the Max Batch Size property may be increased to specify the number of datagrams to batch together in a single FlowFile. This processor can be restricted to listening for datagrams from a specific remote host and port by specifying the Sending Host and Sending Host Port properties, otherwise it will listen for datagrams from all hosts and ports.
ListFile
Retrieves a listing of files from the local filesystem. For each file that is listed, creates a FlowFile that represents the file so that it can be fetched in conjunction with FetchFile. This Processor is designed to run on Primary Node only in a cluster. If the primary node changes, the new Primary Node will pick up where the previous node left off without duplicating all of the data. Unlike GetFile, this Processor does not delete any data from the local filesystem.
ListHDFS.
ListS3
Retrieves a listing of objects from an S3 bucket. For each object that is listed, creates a FlowFile that represents the object so that it can be fetched in conjunction with FetchS3Object. This Processor is designed to run on Primary Node only in a cluster. If the primary node changes, the new Primary Node will pick up where the previous node left off without duplicating all of the data.
ListSFTP
Performs a listing of the files residing on an SFTP server. For each file that is found on the remote server, a new FlowFile will be created with the filename attribute set to the name of the file on the remote server. This can then be used in conjunction with FetchSFTP in order to fetch those files.
LogAttribute
No description provided.
MergeContent
Merges a Group of FlowFiles together based on a user-defined strategy and packages them into a single FlowFile. It is recommended that the Processor be configured with only a single incoming connection, as Group of FlowFiles will not be created from FlowFiles in different connections. This processor updates the mime.type attribute as appropriate.
ModifyBytes
Discard byte range at the start and end or all content of a binary file.
ModifyHTMLElement
Modifies the value of an existing HTML element. The desired element to be modified is located by using CSS selector syntax. to find the element the user desires to modify. If the HTML element is found the element’s value is updated in the DOM using the value specified “Modified Value” property. All DOM elements that match the CSS selector will be updated. Once all of the DOM elements have been updated the DOM is rendered to HTML and the result replaces the flowfile content with the updated HTML. A more thorough reference for the CSS selector syntax can be found at “”
MonitorActivity
Monitors the flow for activity and sends out an indicator when the flow has not had any data for some specified amount of time and again when the flow’s activity is restored
ParseSyslog
Parses the contents of a Syslog message and adds attributes to the FlowFile for each of the parts of the Syslog message
PostHTTP
Performs an HTTP Post with the content of the FlowFile
PublishAMQP
Creates a AMQP Message from the contents of a FlowFile and sends the message to an AMQP Exchange.In a typical AMQP exchange model, the message that is sent to the AMQP Exchange will be routed based on the ‘Routing Key’ to its final destination in the queue (the binding). If due to some misconfiguration the binding between the Exchange, Routing Key and Queue is not set up, the message will have no final destination and will return (i.e., the data will not make it to the queue). If that happens you will see a log in both app-log and bulletin stating to that effect. Fixing the binding (normally done by AMQP administrator) will resolve the issue.
PublishJMS
Creates a JMS Message from the contents of a FlowFile and sends it to a JMS Destination (queue or topic) as JMS BytesMessage. FlowFile attributes will be added as JMS headers and/or properties to the outgoing JMS message.
PublishKafka
Sends the contents of a FlowFile as a message to Apache Kafka, using the Kafka 0.9.x Producer. The messages to send may be individual FlowFiles or may be delimited, using a user-specified delimiter, such as a new-line. The complementary NiFi processor for fetching messages is ConsumeKafka.
PublishMQTT
Publishes a message to an MQTT topic
PutAzureEventHub
Sends the contents of a FlowFile to a Windows Azure Event Hub. Note: the content of the FlowFile will be buffered into memory before being sent, so care should be taken to avoid sending FlowFiles to this Processor that exceed the amount of Java Heap Space available.
PutCassandraQL
Execute provided Cassandra Query Language (CQL) statement on a Cassandra 1.x, 2.x, or 3.0.x cluster. The content of an incoming FlowFile is expected to be the CQL command to execute. The CQL command may use the ? to escape parameters. In this case, the parameters to use must exist as FlowFile attributes with the naming convention cql.args.N.type and cql.args.N.value, where N is a positive integer. The cql.args.N.type is expected to be a lowercase string indicating the Cassandra type.
PutCouchbaseKey
Put a document to Couchbase Server via Key/Value access.
PutDistributedMapCache
Gets the content of a FlowFile and puts it to a distributed map cache, using a cache key computed from FlowFile attributes. If the cache already contains the entry and the cache update strategy is ‘keep original’ the entry is not replaced.’
PutDynamoDB
Puts a document from DynamoDB based on hash and range key. The table can have either hash and range or hash key alone. Currently the keys supported are string and number and value can be json document. In case of hash and range keys both key are required for the operation. The FlowFile content must be JSON. FlowFile content is mapped to the specified Json Document attribute in the DynamoDB item.
PutElasticsearch
Writes the contents of a FlowFile to Elasticsearch, using the specified parameters such as the index to insert into and the type of the document. If the cluster has been configured for authorization and/or secure transport (SSL/TLS) and the Shield plugin is available, secure connections can be made. This processor supports Elasticsearch 2.x clusters.
PutEmail
Sends an e-mail to configured recipients for each incoming FlowFile
PutFile
Writes the contents of a FlowFile to the local file system
PutFTP
Sends FlowFiles to an FTP Server
PutHBaseCell
Adds the Contents of a FlowFile to HBase as the value of a single cell
PutHBaseJSON.
PutHDFS
Write FlowFile data to Hadoop Distributed File System (HDFS)
PutHiveQL
Executes a HiveQL DDL/DML command (UPDATE, INSERT, e.g.). The content of an incoming FlowFile is expected to be the HiveQL command to execute. The HiveQL command may use the ? to escape parameters. In this case, the parameters to use must exist as FlowFile attributes with the naming convention hiveql.args.N.type and hiveql.args.N.value, where N is a positive integer. The hiveql.args.N.type is expected to be a number indicating the JDBC Type. The content of the FlowFile is expected to be in UTF-8 format.
PutHTMLElement
Places a new HTML element in the existing HTML DOM. The desired position for the new HTML element is specified by using CSS selector syntax. The incoming HTML is first converted into a HTML Document Object Model so that HTML DOM location may be located in a similar manner that CSS selectors are used to apply styles to HTML. The resulting HTML DOM is then “queried” using the user defined CSS selector string to find the position where the user desires to add the new HTML element. Once the new HTML element is added to the DOM it is rendered to HTML and the result replaces the flowfile content with the updated HTML. A more thorough reference for the CSS selector syntax can be found at “”
PutJMS
Creates a JMS Message from the contents of a FlowFile and sends the message to a JMS Server
PutKafka
Sends the contents of a FlowFile as a message to Apache Kafka, specifically for 0.8.x versions. The messages to send may be individual FlowFiles or may be delimited, using a user-specified delimiter, such as a new-line. The complementary NiFi processor for fetching messages is GetKafka.
PutKinesisFirehose
Sends the contents to a specified Amazon Kinesis Firehose. In order to send data to firehose, the firehose delivery stream name has to be specified.
PutLambda
Sends the contents to a specified Amazon Lamba Function. The AWS credentials used for authentication must have permissions execute the Lambda function (lambda:InvokeFunction).The FlowFile content must be JSON.
PutMongo
Writes the contents of a FlowFile to MongoDB
PutRiemann NiFi Expression Language.
PutS3Object
Puts FlowFiles to an Amazon S3 Bucket The upload uses either the PutS3Object method or PutS3MultipartUpload methods. The PutS3Object method send the file in a single synchronous call, but it has a 5GB size limit. Larger files are sent using the multipart upload methods that initiate, transfer the parts, and complete an upload. This multipart process saves state after each step so that a large upload can be resumed with minimal loss if the processor or cluster is stopped and restarted. A multipart upload consists of three steps 1) initiate upload, 2) upload the parts, and 3) complete the upload. For multipart uploads, the processor saves state locally tracking the upload ID and parts uploaded, which must both be provided to complete the upload. The AWS libraries select an endpoint URL based on the AWS region, but this can be overridden with the ‘Endpoint Override URL’ property for use with other S3-compatible endpoints. The S3 API specifies that the maximum file size for a PutS3Object upload is 5GB. It also requires that parts in a multipart upload must be at least 5MB in size, except for the last part. These limits are establish the bounds for the Multipart Upload Threshold and Part Size properties.
PutSFTP
Sends FlowFiles to an SFTP Server
PutSlack
Sends a message to your team on slack.com
PutSNS
Sends the content of a FlowFile as a notification to the Amazon Simple Notification Service
PutSolrContentStream
Sends the contents of a FlowFile as a ContentStream to Solr
PutSplunk.
PutSQL.
PutSQS
Publishes a message to an Amazon Simple Queuing Service Queue
PutSyslog
Sends Syslog messages to a given host and port over TCP or UDP. Messages are constructed from the “Message ___” properties of the processor which can use expression language to generate messages from incoming FlowFiles. The properties are used to construct messages of the form: (
PutTCP
The PutTCP processor receives a FlowFile and transmits the FlowFile content over a TCP connection to the configured TCP server. By default, the FlowFiles are transmitted over the same TCP connection (or pool of TCP connections if multiple input threads are configured). To assist the TCP server with determining message boundaries, an optional “Outgoing Message Delimiter” string can be configured which is appended to the end of each FlowFiles content when it is transmitted over the TCP connection. An optional “Connection Per FlowFile” parameter can be specified to change the behaviour so that each FlowFiles content is transmitted over a single TCP connection which is opened when the FlowFile is received and closed after the FlowFile has been sent. This option should only be used for low message volume scenarios, otherwise the platform may run out of TCP sockets.
PutUDP
The PutUDP processor receives a FlowFile and packages the FlowFile content into a single UDP datagram packet which is then transmitted to the configured UDP server. The user must ensure that the FlowFile content being fed to this processor is not larger than the maximum size for the underlying UDP transport. The maximum transport size will vary based on the platform setup but is generally just under 64KB. FlowFiles will be marked as failed if their content is larger than the maximum transport size.
QueryCassandra
Execute provided Cassandra Query Language (CQL) select query on a Cassandra 1.x, 2.x, or 3.0.
QueryDatabaseTablequerydbtable.row.count’ indicates how many rows were selected.
ReplaceText
Updates the content of a FlowFile by evaluating a Regular Expression (regex) against it and replacing the section of the content that matches the Regular Expression with some alternate value.
ReplaceTextWithMapping
Updates the content of a FlowFile by evaluating a Regular Expression against it and replacing the section of the content that matches the Regular Expression with some alternate value provided in a mapping file.
ResizeImage
Resizes an image to user-specified dimensions. This Processor uses the image codecs registered with the environment that NiFi is running in. By default, this includes JPEG, PNG, BMP, WBMP, and GIF images.
RouteHL7
Routes incoming HL7 data according to user-defined queries. To add a query, add a new property to the processor. The name of the property will become a new relationship for the processor, and the value is an HL7 Query Language query. If a FlowFile matches the query, a copy of the FlowFile will be routed to the associated relationship.
RouteOnAttribute
Routes FlowFiles based on their Attributes using the Attribute Expression Language
RouteOnContent
Applies Regular Expressions to the content of a FlowFile and routes a copy of the FlowFile to each destination whose Regular Expression matches. Regular Expressions are added as User-Defined Properties where the name of the property is the name of the relationship and the value is a Regular Expression to match against the FlowFile content. User-Defined properties do support the Attribute Expression Language, but the results are interpreted as literal values, not Regular Expressions
RouteText.
ScanAttribute
Scans the specified attributes of FlowFiles, checking to see if any of their values are present within the specified dictionary of terms
ScanContent
Scans the content of FlowFiles for terms that are found in a user-supplied dictionary. If a term is matched, the UTF-8 encoded version of the term will be added to the FlowFile using the ‘matching.term’ attribute
SegmentContent
Segments a FlowFile into multiple smaller segments on byte boundaries. Each segment is given the following attributes: fragment.identifier, fragment.index, fragment.count, segment.original.filename; these attributes can then be used by the MergeContent processor in order to reconstitute the original FlowFile
SelectHiveQL
Execute provided HiveQL SELECT query against a Hive database connection. Query result will be converted to Avro or CSVselecthiveql.row.count’ indicates how many rows were selected.
SetSNMP
Based on incoming FlowFile attributes, the processor will execute SNMP Set requests. When founding attributes with name like snmp$
SplitAvro.
SplitContent
Splits incoming FlowFiles by a specified byte sequence
SplitJson
Splits a JSON File into multiple, separate FlowFiles for an array element specified by a JsonPath expression. Each generated FlowFile is comprised of an element of the specified array and transferred to relationship ‘split,’ with the original file transferred to the ‘original’ relationship. If the specified JsonPath is not found or does not evaluate to an array element, the original file is routed to ‘failure’ and no files are generated..
SplitXml
Splits an XML File into multiple separate FlowFiles, each comprising a child or descendant of the original root element
SpringContextProcessor
A Processor that supports sending and receiving data from application defined in Spring Application Context via predefined in/out MessageChannels.
StoreInKiteDataset
Stores Avro records in a Kite dataset
TailFile
’.
TransformXml
Applies the provided XSLT file to the flowfile XML payload. A new FlowFile is created with transformed content and is routed to the ‘success’ relationship. If the XSL transform fails, the original FlowFile is routed to the ‘failure’ relationship
UnpackContent
Unpacks the content of FlowFiles that have been packaged with one of several different Packaging Formats, emitting one to many FlowFiles for each input FlowFile
UpdateAttribute
Updates the Attributes for a FlowFile by using the Attribute Expression Language and/or deletes the attributes based on a regular expression
ValidateXml
Validates the contents of FlowFiles against a user-specified XML Schema file
YandexTranslate
Translates content and attributes from one language to another | https://www.nifi.rocks/apache-nifi-processors-version-0.7.4/ | CC-MAIN-2018-34 | en | refinedweb |
<%@ WebHandler Language="C#" Class="Handler" %> using System; using System.Web; public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { XmlWriterSettings settings = new XmlWriterSettings(); settings.Indent = true; context.Response.ContentType = "text/xml"; using (XmlWriter writer = XmlWriter.Create(context.Response.OutputStream, settings)) { writer.WriteStartElement("slides"); writer.WriteStartElement("slide"); writer.WriteAttributeString("imageUrl", "foo.jpg"); writer.WriteAttributeString("thumbnailUrl", "foo_thumb.jpg"); writer.WriteAttributeString("caption", "this is a test"); writer.WriteEndElement(); writer.WriteEndElement(); } } public bool IsReusable { get { return
using System.Xml;
at the beginning of your source code file, and also check the project References, as peter stated.
Check if System.Xml.dll reference is checked. | https://www.experts-exchange.com/questions/23120107/xmlwriter-not-recognised.html | CC-MAIN-2018-34 | en | refinedweb |
GLSL & GLES Small Difference
Hello,
trying a library I made, that I tested on android didn't seem to do what I expected it to do, it seemed that shader effect had a issue.
This is how it works on PC (GLSL)
And how it works on Android (GLES)
As you can see the fade-effect from the light isn't working well on larger distances, it makes a square around the light. That's because the formula I was using divides the distance of the pixel to the center of the light, therefore there's a loss of precision in that calculation.
vec4 effect(vec4 Color, Image Texture, vec2 tc, vec2 pc) { float Distance = distance( vec3(pc, 0.0), Center); if (Distance <= Radius) { return mix(vec4(0.0), vec4(1.0), 1.0 - Distance / Radius); } return vec4(0.0, 0.0, 0.0, 0.0); }
My quick solution was to add this to the shader header.
#ifdef GL_ES #ifdef GL_FRAGMENT_PRECISION_HIGH precision highp float; #else precision mediump float; #endif #endif
Which immediately solved the problem, now it doesn't break on android.
I would propose to add this to the shader headers, that are added to every shader created via newShader, but it is not my duty to evaluate what the cost of using this is going to be (if it's going to cause a bigger overhead, etc).
So it's up to you (developers), this is just an enhancement I'm proposing.
Unfortunately not every OpenGL ES device that love runs on supports highp in pixel shaders, and some of the old devices that do support it will have fairly significant slowdowns if it's used everywhere.
You can mark your distance-related variables with
highpif you don't care about those really low end/old devices.
Another option might be to do the distance calculation in a vertex shader (which does always support and use highp), and pass the normalized result down to the pixel shader, it'll be interpolated across the pixels of the objects you render.
It seems that operating with vectors ( length( DeltaVector / Radius ) ) solves my issue too, perhaps it would be a better to put an advise on the wiki? | https://bitbucket.org/rude/love/issues/1367/glsl-gles-small-difference | CC-MAIN-2018-34 | en | refinedweb |
Problem using subclasses of a Strategy
- frankles_42 last edited by
Hey guys I'm definitely a 'novice' coder so maybe I'm messing something up with the way I'm using classes and it's not backtrader related. Would really appreciate any help. Basically I created a class for a general strategy called MyStrat then I created a subclass called MacdStrat that uses the MACD to place buys and sells. When I had the MACD code inside of MyStrat everything was working great but I decided to make a subclass so that I could make multiple subclasses with different strategies. Here's my code below with my first class and then the subclass. Thanks again.
import backtrader as bt class MyStrat(bt.Strategy): def notify_fund(self, cash, value, fundvalue, shares): self.cash = cash def notify_order(self, order): if order.status in [order.Submitted, order.Accepted]: return if order.status in [order.Completed]: # if order.isbuy(): # self.log("BOUGHT %i SHARES at $%.2f" % (order.executed.size, order.executed.price)) # elif order.issell(): # self.log("SOLD %i SHARES at $%.2f" % (order.executed.size, order.executed.price)) self.shares = self.shares + order.executed.size self.order = None def __init__(self): self.order = None self.shares = 0 self.cash = 0 def next(self): if self.order: return #if self.cash > 0: #BUY CONDITION: #self.log("BUY CREATED at $%.2f" % self.data.close[0]) #self.order = self.buy(size=int(self.cash*0.50/self.data.close[0])) #if self.shares > 0: #if SELL CONDITION: #self.log("SELL CREATED at $%.2f" % self.data.close[0]) #self.order = self.sell(size=int(self.shares*0.50)) def log(self, txt, dt=None): dt = dt or self.datas[0].datetime.date(0) print("%s, %s" % (dt.isoformat(), txt)) class MacdStrat(MyStrat): def __init__(self): self.macd = bt.indicators.MACD(self.data.close) self.cross = bt.indicators.CrossOver(self.macd.macd, self.macd.signal) self.order = None self.shares = 0 self.cash = 0 print(self.cross) def next(self): if self.order: return if self.cash > 0: if self.cross[0] > 0: #self.log("BUY CREATED at $%.2f" % self.data.close[0]) self.order = self.buy(size=int(self.cash*0.50/self.data.close[0])) if self.shares > 0: if self.cross[0] < 0: #self.log("SELL CREATED at $%.2f" % self.data.close[0]) self.order = self.sell(size=int(self.shares*0.50))
@frankles_42 Google
python class super function
That should get you on your way.
- frankles_42 last edited by
@run-out I have tried using the
super()function before and that hasn't really helped. I forgot to include that my error was related to files in the indicators folder of the backtrader package, specifically a file called
basicops.pyand specifically a class that's called
Averagein that file. However now that I try implementing all of the indicators and if statements in the
MyStratclass, now its not working and is giving the same error. So at this point I think I'm messing something up with backtrader and not the classes. THx
@frankles_42 We could probably help better if you share your error codes. Thanks | https://community.backtrader.com/topic/3423/problem-using-subclasses-of-a-strategy | CC-MAIN-2021-10 | en | refinedweb |
Installation
OverviewOverview
Kommunicate is live chat and chatbots powered customer support software. Kommunicate allows you to add customizable live chat SDK to your Android apps. It enables you to chat with your app users and customers through a customizable chat interface.
Installing Kommunicate in your Android app is easy and fast. We will walk you through the procedure so you can start answering your support queries within a few minutes.
InstallationInstallation
Add the following in your app's(app level) build.gradle dependency:
dependencies { //... implementation 'io.kommunicate.sdk:kommunicateui:2.1.4' }
NOTE: Kommunicate requires minimum Android SDK 16 or higher. Be sure to check if that is what you are using.
You can find the minimum android SDK version in your app's build.gradle file, inside
defaultConfig. The field is named as
minSdkVersion.
Building with ProGuardBuilding with ProGuard
If you are using ProGuard in your application, then add the below rules to your proguard-rules.pro file:
#keep JSON classes -keep class * extends com.applozic.mobicommons.json.JsonMarker { !static !transient <fields>; } -keepclassmembernames class * extends com.applozic.mobicommons.json.JsonParcelableMarker { !static !transient <fields>; } #GSON Config -keepattributes Signature -keep class sun.misc.Unsafe { *; } -keep class com.google.gson.examples.android.model.** { *; } -keep class org.eclipse.paho.client.mqttv3.logging.JSR47Logger { *; } -keep class android.support.** { *; } -keep interface android.support.** { *; } -dontwarn android.support.v4.** -keep public class com.google.android.gms.* { public *; } -dontwarn com.google.android.gms.** -keep class com.google.gson.** { *; } | https://docs.kommunicate.io/docs/android-installation.html | CC-MAIN-2021-10 | en | refinedweb |
SP). We can read, write and add data to a file and perform some simple operations (format, rename, retrieve information, etc.)
Introducing the SPIFFS (SPI Flash File System)
SPIFFS (for Serial Peripheral Interface Flash File System) is a file system developed by Peter Andersson (project page on GitHub) that can run on any NOR flash or SPI flash.
The library developed for ESP8266 modules includes most of the functionalities with some additional limitations due to the limitations of microcontrollers:
- there is no file tree. The files are placed flat in the file area. Instead, it is possible to use the “\” character in the file name to create a pseudo tree.there is no file tree. The files are placed flat in the file area. Instead, it is possible to use the “\” character in the file name to create a pseudo tree.
- this is the second important limitation. The ‘\0’ character is reserved and automatically added at the end of the file name for compatibility with C language character strings. Warning, the file extension generally consumes 4 out of the 31 useful characters.this is the second important limitation. The ‘\0’ character is reserved and automatically added at the end of the file name for compatibility with C language character strings. Warning, the file extension generally consumes 4 out of the 31 useful characters.
- in case of error no error message will appear during compilation or at runtime if the limit of 32 characters is exceeded. If the program does not work as expected, be sure to check the file name.in case of error no error message will appear during compilation or at runtime if the limit of 32 characters is exceeded. If the program does not work as expected, be sure to check the file name.
Other useful limitations to know:
- Space (s) or accented character (s) must not be used in the file name
- There is no queue
- The writing time is variable from one file to another
- SPIFFS is for small flash memory devices, do not exceed 128MB of storage
- There is no bad block detection mechanism
Discovery of the SPIFFS.h library, API and available methods
The SPIFFS.h library is a port of the official library for Arduino which is installed at the same time as the ESP32 SDK.
The proposed methods are almost identical to the FS.h library for ESP8266.
The following methods are not available
To access the file system, all you have to do is declare it at the start of the sketch
#include "SPIFFS.h"
How to format a file name (path)?
SPIFFS does not manage the tree.
However, we can create a pseudo tree using the “/” character in the file name without exceeding the limit of 31 useful characters.
The file path must always start with the character “/”, for example /fichier.txt
The methods (API) of the SPIFFS.h library
This method mounts the SPIFFS file system and must be called before any other FS method is used. Returns true if the file system was mounted successfully.
It is advisable to mount the file system in the setup
void setup() { // Launch SPIFFS file system if(!SPIFFS.begin()){ Serial.println("An Error has occurred while mounting SPIFFS"); } }
Format the file system. Returns true if formatting was successful. Attention, if files are present in the memory area, they will be irreversibly deleted.
if (!SPIFFS.begin(true)) { Serial.println("An Error has occurred while mounting SPIFFS"); return; } bool formatted = SPIFFS.format(); if ( formatted ) { Serial.println("SPIFFS formatted successfully"); } else { Serial.println("Error formatting"); }
Open a file
path must be an absolute path starting with a forward slash (eg /dir/file_name.txt).
option is a string specifying the access mode. It can be
- “r” read, read only
- “r +” read and write. The pointer is positioned at the start of the file
- “w” write, write. The existing content is deleted. The file is created if it does not exist
- “w +” opens the file for reading and writing. The file is created if it does not exist, otherwise it is truncated. The pointer is positioned at the start of the file
- “a” append, opens a file adding data. The file is created if it does not exist. The pointer is positioned at the end of the file if it already exists
- “a +” append, opens a file adding data. The file is created if it does not exist. The pointer is positioned at the start of the file for reading and at the end of the file for writing (appending)
Returns the File object. To check if the file was opened successfully, use the Boolean operator.
Once the file is open, here are the methods that allow you to manipulate it
This function behaves like the fseek function of the C language. Depending on the value of mode, the pointer is positioned in the file like this
SeekSet position is set to offset bytes from the start
SeekCur current position is moved by offset bytes
SeekEnd position is set to shift bytes from end of file
The function returns true if the position was set successfully
Returns the current position in the file in bytes.
Returns the size of the file in bytes. Please note, it is not possible to know the size of a folder
File file = SPIFFS.open("/test.txt"); if(!file){ Serial.println("Failed to open file for reading"); return; } Serial.print("File size: "); Serial.println(file.size()); file.close();
Returns the name of the file in a constant in the format const char *
Close the file
Folder operations
There is no difference between file and folder. The isDirectory() method lets you know if the file is a folder. It is not possible to know the size of a folder
Open following folder
Returns true if a file with a given path exists, false otherwise.
Returns the total number of bytes used by the SPIFFS file system.
Returns the space used by the specified file in bytes
Deletes the file based on its absolute path. Returns true if the file was deleted successfully.
Renames the file from pathFrom to pathTo. The paths must be absolute. Returns true if the file was renamed successfully.
Unmounts the filesystem
How to transfer files to the SPIFFS memory area?
It is possible to directly upload files to the SPIFFS file system using the plugin for the Arduino ESP32 Sketch Data Upload IDE.
To do this, simply create a folder named data at the same level as the main Arduino project file. It is better to avoid creating subfolders.
This is because the SPIFFS file system does not manage the file tree. During the transfer, the files will be “flat”, ie the file will take the access path as name.
To learn more, read this tutorial which explains everything in detail.
Retrieve information from the SPIFFS and list of files
Here is a small example of code which allows you to retrieve information from the memory area as well as the list of files found in the memory area.
#include "SPIFFS.h" void listFilesInDir(File dir, int numTabs = 1); void setup() { Serial.begin(112500); delay(500); Serial.println(F("Inizializing FS...")); if (SPIFFS.begin()){ Serial.println(F("SPIFFS mounted correctly.")); }else{ Serial.println(F("!An error occurred during SPIFFS mounting")); } // Get all information of SPIFFS unsigned int totalBytes = SPIFFS.totalBytes(); unsigned int usedBytes = SPIFFS.usedBytes(); Serial.println("===== File system info ====="); Serial.print("Total space: "); Serial.print(totalBytes); Serial.println("byte"); Serial.print("Total space used: "); Serial.print(usedBytes); Serial.println("byte"); Serial.println(); // Open dir folder File dir = SPIFFS.open("/"); // List file at root listFilesInDir(dir); } void listFilesInDir(File dir, int numTabs) { while (true) { File entry = dir.openNextFile(); if (! entry) { // no more files in the folder break; } for (uint8_t i = 0; i < numTabs; i++) { Serial.print('\t'); } Serial.print(entry.name()); if (entry.isDirectory()) { Serial.println("/"); listFilesInDir(entry, numTabs + 1); } else { // display zise for file, nothing for directory Serial.print("\t\t"); Serial.println(entry.size(), DEC); } entry.close(); } } void loop() { }
Open the Serial Monitor to view the occupancy, the available space is the SPIFFS files stored on the flash memory.
Inizializing FS... SPIFFS mounted correctly. File system info. Total space: 1374476byte Total space used: 502byte /test.txt 11
How to write to a file programmatically with SPIFFS.h
We saw how to create a file from a computer and then upload it from the Arduino IDE.
The SPIFFS.h library provides several simple methods for accessing and handling files from an Arduino program. You can use any of the methods listed above.
Add this code just after the file.close(); line
file = SPIFFS.open("/test.txt", "w"); if(!file){ // File not found Serial.println("Failed to open test file"); return; } else { file.println("Hello From ESP32 :-)"); file.close(); }
What does this code do?
This time, we open the file with the option “w” to indicate that we want to modify the file. Previous content will be erased
To write to a file, you can use the print() or println() methods. The println() method adds a newline. We will use it to create a data table for example.
Here, we update the previous content
file.println("Hello From ESP32 :-)");
Upload to see what’s going on
How to add data to a file programmatically?
To add data to a file, just open a file with the “a” (append) option to append data to the end of the file.
If the file does not exist, it will be automatically created.
Here is a small example that records a counter every second.
void loop(){ File file = SPIFFS.open("/counter.txt", "a"); if(!file){ // File not found Serial.println("Failed to open counter file"); return; } else { counter+=1; file.println(counter); file.close(); } delay(1000); }
Updates
02/09/2020 First publication of the post
- | https://diyprojects.io/esp32-get-started-spiff-library-read-write-modify-files/ | CC-MAIN-2021-10 | en | refinedweb |
Subject: Re: [boost] unordered_map failing to compile on MSVC7.1 using STLport
From: Daniel James (dnljms_at_[hidden])
Date: 2012-02-13 16:28:15
On 13 February 2012 17:11, Robert Dailey <rcdailey_at_[hidden]> wrote:
> I would really appreciate some insight here, I have no idea what is going
> on. This is actually preventing company code from compiling, so it's
> extremely important. If I can't get help here the only thing I can do is
> just not use boost.
>
> The "call stack" chain for template instantiation provided in my pasted
> error is confusing. It shows the source of where I create the
> unordered_map, which is in gdgalquery.h(40), but after that it says that
> _locale.h is next? How is _locale.h next? That tells me that STL is
> instantiating my class? Anyone know what is going on?
Sorry, I missed this before. The main problem is that we no longer
have Visual C++ 7.1 testers, so regressions for that compiler are
quite likely. In this case the problem is with support for C++11
allocators. There's an emulated version of C++11's allocator traits
which doesn't seem to work for Visual C++ 7.1. The simplest way to fix
that is probably to disable most of the new features for older
versions of Visual C++, and hope that everything else works okay.
Although it might be possible to get the new features to work on
Visual C++ 7.1. There are two possible places where it's failing.
First is the has_select_on_container_copy_construction trait, second
is the use of SFINAE the disable the function where you saw this
error. This detects in an allocator has a
has_select_on_container_copy_construction member. Try running this:
#include <iostream>
#include <boost/unordered_map.hpp>
int main()
{
std::cout << boost::unordered::detail::
has_select_on_container_copy_construction<
std::allocator<int>
>::value << std::endl;
}
It it prints '0' the trait has successfully found that std::allocator
doesn't have the member, so we know the problem must be in the use of
SFINAE, which might be fixable (I don't know how myself, but I think
other libraries do it). If it prints '1' the trait failed, and I'm not
sure how to fix it.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2012/02/190436.php | CC-MAIN-2021-10 | en | refinedweb |
- Issued:
- 2017-04-12
- Updated:
- 2017-04-12
RHSA-2017:0931 - Security Advisory
Synopsis
Important: kernel-rt security and bug fix update
Type/Severity
Security Advisory: Important
Topic
An update for kernel-rt):
- A-8650, Moderate)
- A flaw was found in the Linux kernel's implementation of setsockopt for the SO_{SND|RCV}BUFFORCE setsockopt() system call. Users with non-namespace CAP_NET_ADMIN are able to trigger this call and create a situation in which the sockets sendbuff data size could be negative. This could adversely affect memory allocations and create situations where the system could crash or cause memory corruption. (CVE-2016-9793, Moderate)
- A flaw was found in the Linux kernel's handling of clearing SELinux attributes on /proc/pid/attr files. An empty (null) write to this file can crash the system by causing the system to attempt to access unmapped kernel memory. (CVE-2017-2618, Moderate)
Red Hat would like to thank Alexander Popov for reporting CVE-2017-2636 and Ralf Spenneberg for reporting CVE-2016-8650. The CVE-2017-2618 issue was discovered by Paul Moore (Red Hat Engineering).
Bug Fix(es):
- Previously, a cgroups data structure was sometimes corrupted due to a race condition in the kernel-rt cgroups code. Consequently, several system tasks were blocked, and the operating system became unresponsive. This update adds a lock that prevents the race condition. As a result, the cgroups data structure no longer gets corrupted and the operating system no longer hangs under the described circumstances. (BZ#1420784)
- The kernel-rt packages have been upgraded to the 3.10.0-514.16.1 source tree, which provides a number of bug fixes over the previous version. (BZ#1430749) - 1395187 - CVE-2016-8650 kernel: Null pointer dereference via keyctl
- BZ - 1402013 - CVE-2016-9793 kernel: Signed overflow for SO_{SND|RCV}BUFFORCE
- BZ - 1419916 - CVE-2017-2618 kernel: Off-by-one error in selinux_setprocattr (/proc/self/attr/fscreate)
- BZ - 1428319 - CVE-2017-2636 kernel: Race condition access to n_hdlc.tbuf causes double free in n_hdlc_release()
- BZ - 1430749 - kernel-rt: update to the RHEL7.3.z batch#4 source tree [RT-7.3.z]
Red Hat Enterprise Linux for Real Time 7
Red Hat Enterprise Linux for Real Time for NFV 7
The Red Hat security contact is secalert@redhat.com. More contact details at. | https://access.redhat.com/errata/RHSA-2017:0931 | CC-MAIN-2021-10 | en | refinedweb |
GREPPER
SEARCH SNIPPETS
USAGE DOCS
INSTALL GREPPER
All Languages
>>
C#
>>
print line in python
“print line in python” Code Answer’s
python new line
python by
Salty Joe
on Oct 31 2020
Donate
3
print("First Line \n" "Second Line")
print line in python
csharp by
Blue Bat
on May 27 2020
Donate
0
# hello world in python print("Hello World!")
Source:
C# answers related to “print line in python”
break a line python
break line in string python
go to line in python
how to print a blank line in python
how to print a line in python
how to print a line of code in python
how to print items in a list in a single line python
line break in python code
new line print python
print each element of list in new line python
print file line by line python
print from 1 to n in python
print linebreak python
print list in one line python
print new line python
print no new line python
print( n ) in python
python print empty line
python print list with newline
python print with new line
python print without new lines
python write line break
python writelines newline
write lines python with line breaks
C# queries related to “print line in python”
python putting .method on new line
python -c new line
how to code a start on new line in python
print line by line python
entering new line in python
how to print a line
python newline Text
newlines python
What is a newline character in python?
print and go to next line python
python print newline -site:pinterest.*
python input write in new line
python newline
how to add new line in python
retrun to line print python
python how to write on a new line
search char in python
python newline print
how to write in the next line in python
python "/n"
print enter in python
how to move down to the next line python
how to get new line in python
python line in str
new line python print
how to make a new line in pythin
python linebreaks
where to use \n in python
print skip line python
how to do python new line
change line in python
how to move to the next line in python
python how to print new line
string endlines python
python print straight line
print lines python
n/ python
change line separator for printing python
newline='' python output
print a new line in python
python print adds line jump
linebreak python
how to print values in a different line python
python break line char
how to skip new line in python
new line in python string
new line python command
new line python string
python print new line
hwo to add new line in python
new line python code
python newline in code
add \n end of line python print
print with newline characters
python n in print
enter in python
code new line pyhton
print with new line and formating python
how to go in next line in python
python if get next line
python next line
line between print value python
how to print on one line in python
python separate with print
how to break a line in python
python move to next line
python go onto new line
newline character python
symbol the come to next line in python
python new line in code
how to write out a new line in python
new line in string without space python
for line break python
how to break line in python
\n python
python n print
print with new line python
how to leave a line in python
how to print new line character in python
separate line in python by ;
python print(\n)
pyhton new line
n: python
print line down in python
break line python
python print end new line
python print line change
how to separate different lines in python
python end of text [
python printnewline
next line python
python prints blank line
python code new line
python print new line list
print python built in new line
how to use n as a string in python
how to add blank lines in python
line break python
python print entire line
add new line .write python
why need a new line in python
new line in print python
jump a line python
print data in same line in python
enter nre line python
enter line python
new line character not working python
how to give new line in list in python
end='' beginning of line python
newline python string
how to go to a new line in python
python newline in text
move python to new line
next line in python
going to next linein oython
how to separate lines in python
how to add new line in python using print
add new line in string python
how to go to next line python
in python how do i make a new line in string
python how to get new line
when to have a new line in python code
newline python
how to line break in python
/n python
python print visible line
how to print a line in python
how to change line in python
write one line up python print
python paragraph break
python new line in text
python end line
python new line list into
newline in python
how to use n in print statement in python
printing innext line python
python print change line
how to newline in python
how to create a new line in python
python print new line character
python \n
how to give line space in python print statememt
entry with a new line python
line change in python
new line character python
new line character in python
does print add /n characters
nextline vs newline in python
new line in python print
python prine line
print in new line in python
new line syntax in python
python go to next line
python print in new line
how to skip to a different line in python
string line python
how to add line in print pthone
how to end a line in python
python python with new line
python line break
how to use new line in python
how to print newline'
how to show output in a different line in python
how to add a new line with one print statement
printing line jump in python
print in same line python3
line break in python
how to do a new line in python script
how to write to new line python
python print no linefeed
new line code python
new line python
end line python
print method in python in single line
how to show the /n in python
python print \n
make new line in python
how to p[rint in new line
python new line
py print line
equvalent of print and println in python
getting print in ython with new line
python pring lines
python 3 print on same line
python print without starting new line
printlinein python
how to print in a line python
print python 3 wo new line
python print lines
python 3.8 print extra newline
print without print python
how to print numbers in python next to each other without adding then
print in a line in python
print() and printline python
in line print python
print in line python
print python without newline
python print lin
print line python
print line in python
print statement in python changing line
print line in python 3
python print line
python print line and value
print line pythom
how to print line in python
print line in python
Learn how Grepper helps you improve as a Developer!
INSTALL GREPPER FOR CHROME
More “Kinda” Related C# Answers
View All C#
how to print in python).
code for showing contents of a file and printing it in python
python get input
get working directory python
python writelines newline
get path to current directory python
superscript print python
how to print a blank line in python
python create file if not exists
one. line if statement. python
python print without newline
python write to file
python get all file names in directory
python multiline docstring styles
how do you use a print statement in python
python check if file has content
Write a Python program to read last n lines of a file
creating a new folder in python
get text from txt file python
python reference script directory
get files in directory python
print string and variable python
find the path to a file python
python copy file
move file python
python execute string
exit python terminal
exception get line number python
python get line number of error find word in file python
how to reference a file in
python edit text file
.center in python
get_terminal_sizee python
python open folder
python list all files in directory
how to know how much lines a file has using python
can you print to multiple output files python
print python
python how to print
how to say hello in python
python print same line
os walk example
open choose files from file explorer python
exception pyton print
extract url from page python
how to print text after an interger
python os if file exists
read shp in python
python exit program
print items in object python
Write a Python program to read first n lines of a file
input stdin python
how to print items in a list in a single line python
fastest way to output text file in python + Cout
python write to command prompt
read file python
python check if string is in input
print textbox value in tkinter
1 line if statement python
How to open dialog box to select folder in python
what is a print statement
python hide details
python write a list to a file line by line
python change line in code
python read file without newline
python overwrite text that is already printed
python print format
how to add variables and text in python on same line
print statements
get text selenium
typage in python
python sys.argv
exit python script
python get html info
get list input from user in python
get list of all files in folder and subfolders python
python read arguments
how to make inputs in a loop in python
python get copied text
how to take multiple input in python
how to take input until a condition is satisfied in python
how to read a text file line by line in python
array storing in file by python
taking input of array in python
python how to get the folder name of a file
how to read a specific line from a text file in python
store all files name in a folder python
python list directory files
python one line if without else
print each element of list in new line python
how to read a file into array in python
how to print to command line python
python writeline file
python get dir
how to print on the same line python
input command in python shell
create text file in directory python linux
how to take multiple integer input in python on one line
print colored text python on terminal
print in text file python
python input
python dont exit script after print
python file handling
how to cout in python
print multiple lines python
how to check if file exists pyuthon
read entire content of a file using infile python
how to get python to extract information from a text file
python center window
new line python write to file
hello world in python
python execute shell command and get output
python read text file into string
if else python in single line
python print array
python print object
how to run cmd line commands in python
how to take input of something in python
how to read text file in python
exit in python
python write
import files from another folder python
input python 3
print variable string python
How can I get terminal output in python
python end script
exit py console
text based game python
python file location path
how to check the size of a file in python
python print dictionary line by line
python get script path
python file reading
python read lines
how to check if a string is in a file python
get current directory python
what is print in python
python hoe to make a text box
print linebreak python
python read from stdin
python list directories only
print file line by line python
python 2 print sep end
python how to open a file in a different directory in mac
print()
log in python
python get path of current file
python print string and variable
python use variable in another file
import from another file python
python create file
python mock input
python append a file and read
traverse files in directory python
python multiline string
how to get all folders on path in python
python replace text in file
python multiple inputs
python logging to file
python pretty print
how to save to file in python
print( n ) in python
python load text file line by line
.get python
python os move file
python create file if doesbt exist
python read text file to list
get file names in folder python
python single line if
python open a url
python read a directory to get all files in sub folders
print string python
how to search a file in windows 10 using python
python open each file in directory
accept user input in python
python open file relative to script location
python print without new lines
how to compare two text files in python
reading and writing data in a text file with python
call shell script from python
python use functions from another file get what type of file in python
how to take input according to the user in python
how to read a website in python
python loop opening file from directory
how to print a line in python
how to do an if input = python
create log in python
import file from another directory python
python open file from explorer
printf python
python zip folder
zip a directory in python
open word from python
open word document python
how to take input in python
python import filenames that match search
read only the first line python
python get newest file in directory
python how to get directory of script readlines end of file
run python file using python code
python input variable
how to get a user input in python
python get list of files in directory directory of current script file
python get list of all open windows
python file from path
how to get value from txtbox in flask
reading a list in python
python save input to text file
open file in python directory
how to separate url from text in python
print hello world in python
python get current directory
check dir exist python
how to take array input in python in single line
python execute file
how to get what type of file a file is in python
print command in python
read input from
how to read files in python
crawl a folder
python all option
ignore error open file python
editing specific line in text file in python
how to get text of a tag in selenium python
how to print hostname in python
python file back to beginning
exit code python
pyhton comment
You will be provided a file path for input I, a file path for output O, a string S, and a string T. to console python
how to print stdout while processing the execution in python
open web page in python
print command python
file python
how to store something in python
how to take an input in python
how write to file in python
Python program to combine each line from first file with the corresponding line in second file
print output python to file
python print with new line
python file open try except error
python prevent print output
filter directory in python
python printing hello world
one line if statement python without else
python code in 1 line
f.readline python
how to use one with as statement to open two files python
Python program to read a random line from a file
text variable tkinter
create file in a specific directory python
python style console output
python textbox
print python format
Python File Write
how to print multiple strings on one line in python
Python program that takes a text file as input and returns the number of words of a given text file
python monitor directory for files count
python find file name
how to make a text file in python
how to get the current line number in python
input code for python
how to take 2 input in same line in python
check hidden files pycharm
python how to align text writen to a file
printing hello world in python
python how to exit function
python write subprocess stdout stderr to file
python include another python file
how to exit a function python
how to return an output to a function in Python
make a script run itself again python
run python with options
how to print messages in python
python sys.argv exception
python oneline if statement
one line if statement python
get the name of a current script in python
change part of a text file python
python readline
In with statement in pyhton should we close the file?
python carriage return
input stdout python
how to exit program in python
print multiple strings in python
python print advanced
how to access a txt file through os library in python
how to make a function like print in python
python how to write a variable
how to input in python
how to print python text
python get
pyelastic search get document
python multiline code dot
how to display text in command in python
load text read line by line python
python dir all files
reader in python
how to keep old content when using write in python
getters python
python code to print hello world
python comments
python add new line from textarea
store content to file python
return result from exec python
how to save the command result with ! in python
addinput() python
get path from file path python
how to print 's in python
python writelines
python read entire file
Python program to read a random line from a file.
piythin comment
print("python is good")
python functions with input
how to run multiple python files one after another
Python sys info
how to use print in python
python check file format
read file from form input type filepython
statement used to open a file in python
Python program to assess if a file is closed or not
print format python
python sys.stderr.write
python output parameter
file input python
how to empty a text file in python
how to load a python file in console, pycharm ?
get string from terminal python
how to use inputs in python
input int python
python how to get current line number
pytest input()
python print statements
print variable python
Search bar python program
how print hello world in python
how to comment multiple lines in python ion sublime text
write help text python script
python print array line by line
python on read text execute command
how to let someone select a folder in python
read yml file in python
exec to return a value python
python load a txt file and assign a variable
Search bar using python
find location of a class in python
how to use print statement in python
how to break up xml data in python
python recursion print statement
como fazer print no python
python script to recursively scan subdirectories
python check if file is writable
how to input a full array in one input in python
multi line cooment in python
readline python sin avanzar de linea
outputting file logs into text in python
python reading into a text file and diplaying items in a user friendly manner
python print all variables in memory
reading text file in python
Python move all files with extension
two type separatos read file python
how to read a data file in python and build a list of files
jupyter notebook - run code line by line
get script text selenium python
python multiple line string
comment in python w3schools
print statement in python
python zip folder and subfolders
python import multiple lines
hoow to print python
load text file line in listbox python tkinter site:stackoverflow.com
python how to say hello world
get list of files in directory python
multiline comment python stack overflow
allow user to input text and create a file with that name in python
use ipython magic in script
Write a Python program to count the number of lines in a text file.
how to read file again in python
python console ending multiline input
what is the correct way to output a string to the console in python
python with file
python input separated by
comment all selected lines in python
print function python
python how to write code over multiple lines
what does \n do in python?
how to read a pkl file in python
jupyter notebook pass python variable to shell
multiline f string python
python iterate over line of gzip file
how to write statements in python
python get names of input arguments
*open(0) in python
how to return paragraph in python
how to read specific words from a file in python
python logger printing multiple times
python make return on multiple lines
extract directory python
write in multiple files python
reopen closed file python
python global variable across files
python inline print variable
is python not a real programing laguage because lines dont end in ;
Python program to get the file size of a plain file.
python commenting
python printing
dir() in python
python event start from file funcion
int and text on same line python
You will be passed a file path P and string S on the command line. Output the number of times the string S appears in the file P.
python input new line
in python how to end the code after 10 input
python print functoin
how to get input in python3
python print an array
open() python all flags
python open folder in explorer
python lxml get parent
continue reading lines until there is no more input python
how to create a save command in python
print fps in while loop python
method get first last name python
python read file list from directory
getting vocab from a text file python
how to take user input and create a file in python
how to import file from another directory in python
come traferire file python
how to print a line of code in python
with open python print file name python | https://www.codegrepper.com/code-examples/csharp/print+line+in+python | CC-MAIN-2021-10 | en | refinedweb |
![: 1508
Can you explain any C28 device specific items within the TI-RTOS kernel?
The following shows most SYS/BIOS configuration settings using *.cfg source code snippets. You can also configure 28x applications using the XGCONF Graphical Configuration Tool. For example, when you open the tool, you see the Welcome screen. You can click the System Overview button to see a block diagram of the available SYS/BIOS modules. Modules used by your application have a green checkmark. Notice the modules circled in red in the following figure; they have configuration settings that are specific to 28x devices.
If you click the Device Support button at the top of the XGCONF page, you see a list of SYS/BIOS modules that apply to the 28x. You can click those links to see a configuration page for each module.
Note that some 28x devices do not have enough memory to run SYS/BIOS applications. See the release notes in your SYS/BIOS installation for a detailed, up-to-date list of supported devices.
If you are using a Concerto device, this page describes how to use SYS/BIOS with the 28x part of the Concerto.
First, please look at the general boot for a device with TI-RTOS:.
For 28x devices, SYS/BIOS installs a configurable Startup reset function called ti_catalog_c2800_init_Boot_init(), which is provided by XDCtools. Depending on your configuration of the ti.catalog.c2800.init.Boot module, this function performs the following actions:
You can configure what the Startup reset function does using the C28x Boot module. First, in XGCONF select the Boot module from the SYS/BIOS System Overview page:
You will see the configuration page for the 28x Boot module.
Watchdog timer: By default, the watchdog timer is enabled, meaning that if the system hangs long enough for the watchdog timer's 8-bit counter to reach its maximum value, a system reset is triggered. To prevent this, you can check the "Disable the watchdog timer" box in XGCONF or use the following configuration statements:
var Boot = xdc.useModule('ti.catalog.c2800.init.Boot');
Boot.disableWatchdog = true;
Boot from Flash: If you want to be able to boot this application from Flash, check the "Enable boot from FLASH" box in XGCONF or use the following configuration statements:
var Boot = xdc.useModule('ti.catalog.c2800.init.Boot');
Boot.bootFromFlash = true;
If you configure your application to boot from Flash, a long branch (LB) to the c_int00 entry point will be placed at the BEGIN section address defined in the linker command file.
PLL configuration: The phase-locked loop (PLL) on 28x devices is used to maintain correct clocking rates.
Note: On 280x and 281x devices, XDCtools and SYS/BIOS do not configure the PLL. The PLL is in its default state--bypassed but not turned off--after your application finished booting. If you want to modify the configuration of the PLL on these devices, you can add a Reset function as described at this end of this step (2).
By default, XDCtools automatically enables the PLL for your 2802x, 2803x, 2806x, 282xx, 283xx, or 2834x device. If you want to override the default PLL configuration, you can check the "Configure the PLL" box in XGCONF and set values for the following parameters:
Boot.pllOSCCLK * Boot.pllcrDIV * 1000000) / divider
For example, the following configuration statements configure the PLL to a slower rate.
var Boot = xdc.useModule('ti.catalog.c2800.init.Boot');
Boot.configurePll = true;
Boot.pllOSCCLK = 8;
Boot.pllcrDIV = 2;
Boot.pllstsDIVSEL = 0;
User-defined reset functions: If you want to add your own functions to the table of early reset functions, you can do so by adding statements like the following to your application configuration file:
Reset = xdc.useModule('xdc.runtime.Reset');
Reset.fxns[Reset.fxns.length++] = '&myResetFxn';
SYS/BIOS provides the same basic hardware interrupt functionality on 28x devices as it does on other devices. In addition, it provides the ability to configure and use the Peripheral Interrupt Expansion (PIE) block and the zero-latency interrupts supported by 28x devices. The general ti.sysbios.hal.Hwi module is implemented and extended with device-specific functionality by the ti.sysbios.family.c28.Hwi module.
If you want to use only the generic Hwi module run-time APIs and static configuration settings to manage hardware interrupts in your 28x application, you should use the ti.sysbios.hal.Hwi module. To use this module, include the following in your code:
#include <ti/sysbios/hal/Hwi.h>
var Hwi = xdc.useModule('ti.sysbios.hal.Hwi');
Alternately, you can use the 28x-specific versions of the run-time APIs and static configuration settings provided by the ti.sysbios.family.c28.Hwi module. These include some additional features for 28x only. To use this module, include the following in your code:
#include <ti/sysbios/family/c28/Hwi.h>
var Hwi = xdc.useModule('ti.sysbios.family.c28.Hwi');
The 28x Peripheral Interrupt Expansion (PIE) block multiplexes numerous interrupt sources into a smaller set of interrupt inputs. The interrupts are grouped into blocks of either eight or sixteen depending on the device and each group is fed into one of 12 CPU interrupt lines (INT1 to INT12). Each of the interrupts is supported by its own vector stored in a dedicated RAM block that serves as a PIE interrupt vector table. The reference guides linked to in the System Control and Interrupts Reference Guides for C28x topic describe the PIE block in more detail.
The vector is automatically fetched by the CPU when servicing the interrupt. It takes nine CPU clock cycles to fetch the vector and save critical CPU registers. Therefore, 28x CPUs can respond quickly to interrupt events. Each individual interrupt can be enabled/disabled within the PIE block.
You use the same Hwi_create() API and Hwi.create() static configuration method to create Hwi objects for the 32 core interrupt lines (INT1 to INT12) and for individual PIE interrupts. Interrupt numbers 0-32 apply to the core interrupt lines (INT1 to INT12), and numbers 32-223 apply to individual PIE interrupts.
The following table shows the mapping between the PIENUM (interrupt ID) and the PIE groups. INTX.Y is the interrupt number for the PIE interrupt belonging to group X and group-specific id Y. For example, in PIE group 2, interrupt 3 would be INT2.3 with an interrupt ID of 42. Note that columns INTX.9-INTX.16 don't apply to all devices; see your device's technical reference manual for more information.
For example, the following configuration code plugs the function 'myHwi' into the vector table for PIE group 5, interrupt 1. As the above table shows, this corresponds to interrupt ID 64:
var Hwi = xdc.useModule('ti.sysbios.family.c28.Hwi');
/* PIE group 5, interrupt 1 */
var interruptNum = 64;
var hwiParams = new Hwi.Params();
hwiParams.arg = interruptNum;
Hwi.create(interruptNum, "&myHwi", hwiParams);
In addition to creating Hwi objects to service PIE interrupts, you can use the following APIs from the ti.sysbios.family.c28.Hwi module to manage PIE interrupt handling at run-time:
Similar APIs--Hwi_disableIER(), Hwi_enableIER(), and Hwi_restoreIER()--can be used to disable, enable, and restore the core interrupt lines (INT1 to INT12) at run-time.
The Hwi_clearInterrupt() API can be used to clear an interrupt's pending status. For 28x devices this function clears a PIEIFR bit if the interrupt number is a PIE interrupt number. It clears an IFR bit if this is an interrupt number between 1 and 14.
The following additional configuration parameters are provided for configuring 28x interrupts:
You can pass a bitmask to the Hwi.zeroLatencyIERMask property to identify interrupts that should never be disabled. The advantage to doing this is that such interrupts will have minimal interrupt-to-ISR execution time. (Though the property name is named "zero latency", the latency is actually minimized, but not zero.) The disadvantage to using this setting is that when SYS/BIOS disables or enabling interrupts, extra work is required to avoid making changes to such interrupts.
For example, the following configuration code sets INT5 and the PIE group 5 multiplexed to INT5 to provide minimal latency:
var Hwi = xdc.useModule('ti.sysbios.family.c28.Hwi');
Hwi.zeroLatencyIERMask = 0x0010;
Bit 0 in the IER mask corresponds to INT1 and PIE group 1. Bit 1 corresponds to INT2 and PIE group 2, and so on through INT12. Bits 12-16 are used for other purposes, and should not be set in the zeroLatencyIERMask. In the previous example, a mask of 0x0010 sets bit 4, which corresponds to INT5.
By default, the Hwi.zeroLatencyIERMask property is shown as a decimal value in XGCONF:
Note that all the interrupts in the PIE group whose bit is set in the zeroLatencyIERMask will be treated as zero latency interrupts.
Note: We recommend that you use the zeroLatencyIERMask only if all interrupts in the groups execute non-SYS/BIOS interrupt handlers. This feature is best used only in applications that demand very low latency.
CPU interrupts specified in this mask (which corresponds to the 16-bit IER register) are not disabled by the Hwi_disable() call and are generally left enabled except when explicitly disabled using Hwi_disableIER() or Hwi_disablePIEIER().
If you use zero latency mode for any interrupt, the code used to disable, enable, and restore interrupts in SYS/BIOS will be slower. This is because the code needs to set individual bits in the IER register rather than setting the INTM bit. It is important to be aware of the performance tradeoff associated with using zero latency interrupts before using this feature.
To consolidate code that performs register saving and restoration for each interrupt, SYS/BIOS provides an interrupt dispatcher that automatically performs these actions for an interrupt routine. Use of the Hwi dispatcher allows ISR functions to be written in C. In addition to preserving the interrupted thread's context, the SYS/BIOS Hwi dispatcher orchestrates the following actions:
By default, all Hwi interrupts created statically or dynamically with SYS/BIOS are routed to the interrupt dispatcher.
If you have some interrupts that you do not want serviced by the SYS/BIOS interrupt dispatcher, you should create them using the Hwi_plug() run-time API. Such ISR functions are directly plugged into the vector table. Hwi_plug() can only be used for ISR functions that do not call any SYS/BIOS APIs. If you are using Hwi_plug(), you should also be aware of potential timing and resource access issues between ISRs that are and are not managed by the SYS/BIOS interrupt dispatcher.
If you use Hwi_plug() for any PIE interrupts, your application must clear the CPU acknowledge bit manually for the respective PIE block before further interrupts from that block can occur. The SYS/BIOS interrupt dispatcher normally takes care of this. (This differs from DSP/BIOS 5, in which the application had to acknowledge the interrupt.) If your application contains legacy code with HWI instances created with the legacy-support ti.bios.HWI module, the HWI function must also clear the CPU acknowledge bit manually before returning.
Here is a F28379D example of adding a zero-latency interrupt via Hwi_plug(): /cfs-file/__key/communityserver-discussions-components-files/171/4555.C28_5F00_zero_5F00_latency.pdf
There are two limitations to parameters in the Hwi.Params structure used when you create a Hwi object on 28x devices:
The 28x devices have three 32-bit timers--Timer 0 through Timer 2. By default, SYS/BIOS uses two of these timers, one for the Clock module and one for the Timestamp module. Typically, the timers used by SYS/BIOS are Timer 0 and Timer 1, but if your application uses a timer for its own processing, SYS/BIOS will use Timer 2 if necessary.
You can control which 28x timers used by SYS/BIOS by configuring the ti.sysbios.family.c28.TimestampProvider module. By default, SYS/BIOS uses the first two available timers for the Clock and Timestamp modules. You can specify that the Timestamp module should use, for example, Timer 2 with the following configuration code (in .cfg file):
var TimestampProvider = xdc.useModule('ti.sysbios.family.c28.TimestampProvider');
TimestampProvider.timerId = 2;
The following configuration code causes the Timestamp module to use the same 28x timer as the Clock module:
var TimestampProvider = xdc.useModule('ti.sysbios.family.c28.TimestampProvider');
TimestampProvider.useClockTimer = true;
Sharing the Clock timer leaves more timers available for other uses, but makes the Timestamp APIs less efficient. If you use the Clock timer for timestamps, the timestamp is calculated as: (Clock ticks) x (tick period) + (current timer count) As a result, the maximum value of the timestamp is limited to 2^32 x (Clock tick period).
(Clock ticks) x (tick period) + (current timer count)
2^32 x (Clock tick period)
If you use a separate timer for the timestamp (the default behavior), the maximum value of the timestamp is 2^64 and the multiplication operation is not required in order to retrieve the value.
2^64
For Concerto devices, you should use the ti.sysbios.family.[c28|arm].f28m35x.TimestampProvider modules, which access the shared timestamp counter that can be read by either the 28x or M3 core.
Internally, the 28x timers count downward from "period" to 0; however, the Timer_getCount() API subtracts the timer counter value from the period so that it counts upward instead of downward.
The ti.sysbios.family.c28.Timer module configuration lets you specify a mask to identify the CPU timers that are available for use by the Timer module. By default, this mask is set to 7 (111 in binary), which means that Timers 0, 1, and 2 are available. Timers used by SYS/BIOS need not be omitted from this mask, but if your application uses a specific 28x timer, you should omit the bit for that timer from this mask.
If you create an instance of the device-specific ti.sysbios.family.c28.Timer module (rather than an instance of the generic ti.sysbios.hal.Timer module), you can also configure the 28x parameters circled below:
The Timer Id parameter specifies which 28x CPU timer should be used for this instance. If you choose ANY, the first available timer is used. Remember that 28x devices have 3 timers and SYS/BIOS uses 2 by default.
The Prescale factor parameter sets the length of a timer tick using the 28x device's 16-bit prescaler. If a prescale factor of 10 is specified, a timer tick will occur every 11 cycles. If this timer is used as a counter, the prescale factor determines the period between counts. Otherwise, the prescale factor can be used to achieve longer timer periods; with a prescale specified, the actual period is period * prescale+1
period * prescale+1
The 28x Timer module provides the following APIs to access the prescaler:
The Free run and Soft stop parameters let you specify how a timer behaves at a software breakpoint, like those you can set in Code Composer Studio. If the "free" flag is set to 1, the timer will continue to run normally when the program halts at a software breakpoint; the value of the "soft" flag doesn't matter if "free" is set to 1. If "free" is 0 and "soft" is 1, the timer will run down to 0 and then stop. When "free" is 0 and "soft" is 0 (the default), the timer halts at software breakpoints.
For example, the following configuration code creates a 28x Timer with a period of 2000 microseconds and a prescale value of 999. As a result, the Timer ticks every 1000 cycles (prescale+1) and the actual period for running the myTimerFxn() function is 2,000,000 microseconds (2 seconds). When a software breakpoint occurs in Code Composer Studio, this Timer will continue to run.
var ti_sysbios_family_c28_Timer = xdc.useModule('ti.sysbios.family.c28.Timer');
var my28xTimerParams = new ti_sysbios_family_c28_Timer.Params();
my28xTimerParams.instance.name = "my28xTimer";
my28xTimerParams.period = 2000;
my28xTimerParams.prescale = 999;
my28xTimerParams.emulationModeInit.free = 1;
my28xTimerParams.emulationModeInit.soft = 0;
Program.global.my28xTimer = ti_sysbios_family_c28_Timer.create(null, "&myTimerFxn", my28xTimerParams);
As with other device-specific Timer modules, you can also specify the creation parameters for the Hwi object to be triggered by this timer interrupt.
Remember that sizes on 28x devices are measured in 16-bit words. The Minimum Addressable Data Unit (MADU) is 16 bits.
Because the amount of memory available on 28x devices is relatively small, reducing the amount of memory used by applications is likely to be important. You may encounter errors when you build an application if the application footprint is too large to fit in RAM. See the "Minimizing the Application Footprint" appendix of the SYS/BIOS User's Guide (SPRUEX3) for a number of ways to reduce the amount of memory used by SYS/BIOS.
Since the 28x RAM is limited, you may want to consider running the application from Flash memory and copying only critical sections to RAM for faster execution.
The System stack and all Task stacks must be located within Page 0 in memory, which has a memory address of 0 to 0xffff. An error is raised if a Task stack is placed in some other memory location. By default, Task stacks are placed in the .ebss:taskStackSection section, which is on Page 0. This section is a subsection of .ebss to allow SYS/BIOS to be used with the CCS-supplied linker .cmd files for the 28x devices.
The .taskStackSection contains the stacks for statically created tasks. The size of the .taskStackSection is calculated as (Task.defaultStackSize * number of static tasks) + system heap size.
(Task.defaultStackSize * number of static tasks) + system heap size
To reduce the size of this section, you can do the following:
As you decrease heap and stack sizes, you'll need to watch for "out of memory" errors from heaps and overrunning your stacks. The ROV tool can be helpful in monitoring stack and heap usage.. | https://e2e.ti.com/support/microcontrollers/c2000/f/171/t/953170 | CC-MAIN-2021-10 | en | refinedweb |
Attention: Deprecation notice for Bintray, JCenter, GoCenter and ChartCenter. Learn More
github.com/jainishshah17/tugger
Tugger
What does Tugger do?
Tugger is Kubernetes Admission webhook to enforce pulling of docker images from private registry.
Prerequisites
Kubernetes 1.9.0 or above with the
admissionregistration.k8s.io/v1 API enabled. Verify that by.
Build and Push Tugger Docker Image
# Build docker image docker build -t jainishshah17/tugger:0.1.1 . # Push it to Docker Registry docker push jainishshah17/tugger:0.1.1
Create Kubernetes Docker registry secret
# Create a Docker registry secret called 'regsecret' kubectl create secret docker-registry regsecret --docker-server=${DOCKER_REGISTRY} --docker-username=${DOCKER_USER} --docker-password=${DOCKER_PASS} --docker-email=${DOCKER_EMAIL}
Note: Create Docker registry secret in each non-whitelisted namespaces.
Generate TLS Certs for Tugger
./tls/gen-cert.sh
Get CA Bundle
./webhook/webhook-patch-ca-bundle.sh
Deploy Tugger to Kubernetes
Deploy using Helm Chart
The helm chart can generate certificates and configure webhooks in a single step. See the notes on webhooks below for more information.
helm install --name tugger \ --set docker.registrySecret=regsecret, \ --set docker.registryUrl=jainishshah17, \ --set whitelistNamespaces={kube-system,default}, \ --set whitelistRegistries={jainishshah17} \ --set createValidatingWebhook=true \ --set createMutatingWebhook=true \ chart/tugger
Deploy using kubectl
Create deployment and service
# Run deployment kubectl create -f deployment/tugger-deployment.yaml # Create service kubectl create -f deployment/tugger-svc.yaml
Configure
MutatingAdmissionWebhookand
ValidatingAdmissionWebhook
Note: Replace
${CA_BUNDLE}with value generated by running
./webhook/webhook-patch-ca-bundle.sh
# re MutatingAdmissionWebhook kubectl create -f webhook/tugger-mutating-webhook ration.yaml
Note: Use MutatingAdmissionWebhook only if you want to enforce pulling of docker image from Private Docker Registry e.g JFrog Artifactory. If your container image is
nginxthen Tugger will append
REGISTRY_URLto it. e.g
nginxwill become
jainishshah17/nginx
# Configure ValidatingWebhookConfiguration kubectl create -f webhook/tugger-validating-webhook ration.yaml
Note: Use ValidatingWebhookConfiguration only if you want to check pulling of docker image from Private Docker Registry e.g JFrog Artifactory. If your container image does not contain
REGISTRY_URLthen Tugger will deny request to run that pod.
Test Tugger
# Deploy nginx kubectl apply -f test/nginx.yaml
Configure
The mutation or validation policy can be defined as a list of rules in a YAML file.
The YALM file can be specified with the command line argument
--policy-file=FILE, or when using the Helm chart, populate
rules: in values.
Schema
rules: - pattern: regex replacement: template (optional) condition: policy (optional) - ...
pattern is a regex pattern
replacement is a template comprised of the captured groups to use to generate the new image name in the mutating admission controller. When replacement is
null or undefined, the image name is allowed without patching. Rules with this field are ignored by the validating admission controller, where mutation is not supported.
condition is a special condition to test before committing the replacement. Initially
Always and
Exists will be supported.
Always is the default and performs the replacement regardless of any condition.
Exists implements the behavior from #7; it only rewrites the image name if the target name exists in the remote registry.
Each rule will be evaluated in order, and if the list is exhausted without a match, the admission controller will return
allowed: false.
Examples
This example allows all images without rewriting:
rules: - pattern: .*
This example implements the default behavior of rewriting all image names to start with
jainishshah17:
rules: - pattern: ^jainishshah17/.* - pattern: (.*) replacement: jainishshah17/$1
Or the same thing, but only if the image exists in
jainishshah17/, and allowing all other images:
rules: - pattern: ^jainishshah17/.* - pattern: (.*) replacement: jainishshah17/$1 condition: Exists - pattern: .*
Allow the nginx image, but rewrite everything else:
rules: - pattern: ^nginx(:.*)?$ - pattern: (?:jainishshah17)?(.*) replacement: jainishshah17/$1 | https://search.gocenter.io/github.com/jainishshah17/tugger | CC-MAIN-2021-10 | en | refinedweb |
Welcome Source,
- How to write the perfect pull request.
- Visual Studio 2015 CTP 5 is released.
- Up close with the HoloLens, Microsoft’s most intriguing product in years.
- Akka.NET – One Year Later.
Videos/Presentations/Courses
- The F# Path to Relaxation – Don Syme (+slides)
- F#, Property Based Testing With FsCheck – Andrea Magnorsky
- Functional programming design patterns with Scott Wlaschin
- Learn about creating DSLs in F# 2 – Matthew Sottile
- F# News – January 2015 – Troy Kershaw
- Programming in Elixir with Bryan Hunter
- Ford Keynote – CES 2015 (Glimpse of Xamarin / F# Android app – 18 m 40 sec)
Blogs
- A functional approach to authorization – Scott Wlaschin
- Constraining capabilities based on identity and role – Scott Wlaschin
- Using types as access tokens – Scott Wlaschin
- Enigma Machine – Type Provider Edition – Ross McKinlay
- How to get pragmatists to use F# – Arthur Johnston
- MBrace F# large-scale distributed computation progress report and looking for OSS collaborators – Eirik Tsarpalis
- Futures in F# – Mark Watts
- F# Record Types with SqlProvider Code-Last – Jamie Dixon
- Step-5 Advanced Search DSL Using FParsec – Tamizh Vendan
- Averages are not good enough (F#) – Jef Claes
- Two Track Coding (ROP) For Dummies – Part 1 – David Crook
- Two Track Coding (ROP) For Dummies – Part 2 – David Crook
- Beginners quick guide to setup FsBlog and start to blog in 5 minutes – Tomasz Jaskula
- Cyclic Data References in F# – Frank Joppe
- EdLambda 10/02/2015 – Simon Fowler
- F# Higher Order Functions (List) Part3 – Michael Coxeter
- F# type signature gotchas – David Tchepak
- F# Async: Plays well with others? – Ian Voyce
- Programming WatchKit with F# – Larry O’Brien
- F# signatures, a helpful debugging aid – Michael Coxeter
- Stormin’ F# – Faisal Waris
- Minimize mental overhead for code readers – Richard Dalton
- Listen und Morphismen – Carsten König
- Small Basic On Mac & Linux – Phillip Trelford
- Debugging Small Basic Apps In Visual Studio – Phillip Trelford
- A minimal full bitcoin node in F# – Talkera
- Aprender programación funcional con F# (1) – Iwan van der Kleijn
- Wo das lambda herkommt – Carsten König
F# vNext News
- Recently accepted PRs:
- Recently proposed RPs:
- Show warning when DU is accessed without type but RequiredQualifiedAccess was set #103
- Fix name-demangleling of provided types #102
- [WIP] Migrate Cambridge test suite (tests/fsharp) runner to NUnit #90
- Quickfix for #9 and #10 #87
- Enable codegen for exception filters by default #66
- Fix #74: Add Checked.int8/uint8 and Nullable.int8/uint8/single/double #19
- Recently proposed ideas:
- Remove “method” from reserved keyword list
- Connect FSI on breakpoint with environment available.
- Define a function the same way as define lambda
- Allow us to bind names multiple times in a pattern match
- Add Checked.int8/uint8 and Nullable.int8/uint8/single/double
- Use the default keyword instead of the [<DefaultValue>] attribute
- Extend the set of expressions supported in provided methods
- Restrict “private” for items in namespaces to mean “private to the namespace declaration group” #43
- Support for [<CLIEvent>] on modules
New releases
- FsLab 0.1.3 (with integrated R provider + Deedle, working on Mac)
- Paket 0.25.1
- FSharp.TypeProviders.StarterPack 1.1.3.56
- FSharp.Formatting 2.6.3
- Hopac 0.0.0.38
- EventStore.Client.FSharp 3.1.6
- Logary 2.4.1
- Yaaf.FSharp.Scripting 1.0.1
- NLog.FSharp 3.2.0
- Serilog.Extras.FSharp 1.4.139
- Ext.Direct.Mvc.Fsharp 1.0.0
That’s all for now. Have a great week.
Previous F# Weekly edition – #3
One thought on “F# Weekly #4, 2015” | https://sergeytihon.com/2015/01/26/f-weekly-4-2015/ | CC-MAIN-2021-10 | en | refinedweb |
Functional plumbing for Python
Complete documentation in full color_.
.. image:: :target:
pipetoolsis a python package that enables function composition similar to using Unix pipes.
Inspired by Pipe_ and Околомонадное_ (whatever that means...)
.. _Pipe: .. _Околомонадное:
It allows piping of arbitrary functions and comes with a few handy shortcuts.
Source is on github_.
.. _github:))).
Say you want to create a list of python files in a given directory, ordered by filename length, as a string, each file on one line and also with line numbers:
.. code-block:: pycon
>>> print pyfiles_by_length('../pipetools') 0. main.py 1. utils.py 2. __init__.py 3. ds_builder.py
So you might write it like this:
.. code-block:: python:
.. code-block:: python
def pyfiles_by_length(directory): return '\n'.join('{0}. {1}'.format(*x) for x in enumerate(sorted( [f for f in os.listdir(directory) if f.endswith('.py')], key=len)))
Or, if you're a mad scientist, you would probably do it like this:
.. code-block:: pythongive you yet another possibility!
.. code-block:: python.
.. _
The Right Way™:
.. code-block:: console
$ pip install pipetools
Uh, what's that?_
.. _the-pipe:
The pipe """""""" The
pipeobject can be used to pipe functions together to form new functions, and it works like this:
.. code-block:: python
from pipetools import pipe
f = pipe | a | b | c
f(x) == c(b(a(x)))
A real example, sum of odd numbers from 0 to x:
.. code-block:: python
from functools import partial from pipetools import pipe
odd_sum = pipe | range | partial(filter, lambda x: x % 2) | sum
odd_sum(10) # -> 25
Note that the chain up to the
sumis lazy.
Automatic partial application in the pipe """""""""""""""""""""""""""""""""""""""""
As partial application is often useful when piping things together, it is done automatically when the pipe encounters a tuple, so this produces the same result as the previous example:
.. code-block:: python
odd_sum = pipe | range | (filter, lambda x: x % 2) | sum
As of
0.1.9, this is even more powerful, see
X-partial_.
Built-in tools """"""""""""""
Pipetools contain a set of pipe-utils that solve some common tasks. For example there is a shortcut for the filter class from our example, called
where()_:
.. code-block:: python
from pipetools import pipe, where
odd_sum = pipe | range | where(lambda x: x % 2) | sum
Well that might be a bit more readable, but not really a huge improvement, but wait!
If a pipe-util is used as first or second item in the pipe (which happens quite often) the
pipeat the beginning can be omitted:
.. code-block:: python
odd_sum = range |!
.. code-block:: python
from pipetools import where, X
odd_sum = range | where(X % 2) | sum
How 'bout that.
Read more about the X object and it's limitations._
.. _auto-string-formatting:
Automatic string formatting """""""""""""""""""""""""""
Since it doesn't make sense to compose functions with strings, when a pipe (or a
pipe-util) encounters a string, it attempts to use it for
(advanced) formatting:
.. code-block:: pycon
>>> countdown = pipe | (range, 1) | reversed | foreach('{0}...') | ' '.join | '{0} boom' >>> countdown(5) u'4... 3... 2... 1... boom'
.. _(advanced) formatting:
Feeding the pipe """"""""""""""""
Sometimes it's useful to create a one-off pipe and immediately run some input through it. And since this is somewhat awkward (and not very readable, especially when the pipe spans multiple lines):
.. code-block:: python
result = (pipe | foo | bar | boo)(some_input)
It can also be done using the
>operator:
.. code-block:: python
result = some_input > pipe | foo | bar | boo
.. note:: Note that the above method of input won't work if the input object defines
__gt___ for any object - including the pipe. This can be the case for example with some objects from math libraries such as NumPy. If you experience strange results try falling back to the standard way of passing input into a pipe.
See the
full documentation_. | https://xscode.com/0101/pipetools | CC-MAIN-2021-10 | en | refinedweb |
.
Answer:
I had a similar issue in the OnCreate of my Activity.
The adapter was set up with the correct count and I
applied setCurrentItem after setting the adapter to the
ViewPager however is would return index out of bounds. I think the ViewPager had not loaded all my Fragments at the point i set the current item. By posting a runnable on the ViewPager i was able to work around this. Here is an example with a little bit of context.
// Locate the viewpager in activity_main.xml final ViewPager viewPager = (ViewPager) findViewById(R.id.pager); // Set the ViewPagerAdapter into ViewPager viewPager.setAdapter(new ViewPagerAdapter(getSupportFragmentManager())); viewPager.setOffscreenPageLimit(2); viewPager.post(new Runnable() { @Override public void run() { viewPager.setCurrentItem(ViewPagerAdapter.CENTER_PAGE); } });
Answer:
I found a very simple workaround for this:
if (mViewPager.getAdapter() != null) mViewPager.setAdapter(null); mViewPager.setAdapter(mPagerAdapter); mViewPager.setCurrentItem(desiredPos);
And, if that doesn’t work, you can put it in a handler, but there’s no need for a timed delay:
new Handler().post(new Runnable() { @Override public void run() { mViewPager.setCurrentItem(desiredPos); } });
Answer:
I had similar bug in the code, the problem was that I was setting the position before changing the data.
The solution was simply to set the position afterwards and notify the data changed
notifyDataSetChanged() setCurrentItem()
Answer:
I have the same problem and I edit
@Override public int getCount() { return NUM_PAGES; }
I set
NUM_PAGES be mistake to 1 only.
Answer:
I’ve used the post() method described here and sure enough it was working great under some scenarios but because my data comes from the server, it was not the holy grail.
My problem was that i want to have
notifyDataSetChanged
called at an arbitrary time and then switch tabs on my viewPager. So right after the notify call i have this
ViewUtilities.waitForLayout(myViewPager, new Runnable() { @Override public void run() { myViewPager.setCurrentItem(tabIndex , false); } });
and
public final class ViewUtilities { public static void waitForLayout(final View view, final Runnable runnable) { view.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() { @Override public void onGlobalLayout() { //noinspection deprecation view.getViewTreeObserver().removeGlobalOnLayoutListener(this); runnable.run(); } }); } }
Fun fact: the //noinspection deprecation at the end is because there is a spelling mistake in the API that was fixed after API 16, so that should read removeOnGlobalLayoutListener instead of removeGlobalOnLayoutListener
This seems to be covering all cases for me.
Answer:
Solution (in Kotlin with ViewModel etc.) for those trying to set the current item in the
onCreate of
Activity without the hacky
Runnable “solutions”:
class MyActivity : AppCompatActivity() { lateinit var mAdapter: MyAdapter lateinit var mPager: ViewPager // ... override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.fragment_pager) // ... mainViewModel = ViewModelProviders.of(this).get(MainViewModel::class.java) mAdapter = MyAdapter(supportFragmentManager) mPager = findViewById(R.id.pager) mainViewModel.someData.observe(this, Observer { items -> items?.let { // first give the data to the adapter // this is where the notifyDataSetChanged() happens mAdapter.setItems(it) mPager.adapter = mAdapter // assign adapter to pager mPager.currentItem = idx // finally set the current page } })
This will obviously do the correct order of operations without any hacks with
Runnable or delays.
For the completeness, you usually implement the
setItems() of the adapter (in this case
FragmentStatePagerAdapter) like this:
internal fun setItems(items: List<Item>) { this.items = items notifyDataSetChanged() }
Answer:
This is a lifecycle issue, as pointed out by several posters here. However, I find the solutions with posting a
Runnable to be unpredictable and probably error prone. It seems like a way to ignore the problem by posting it into the future.
I am not saying that this is the best solution, but it definitely works without using
Runnable. I keep a separate integer inside the
Fragment that has the
ViewPager. This integer will hold the page we want to set as the current page when
onResume is called next. The integer’s value can be set at any point and can thus be set before a
FragmentTransaction or when resuming an activity. Also note that all the members are set up in
onResume(), not in
onCreateView().
public class MyFragment extends Fragment { private ViewPager mViewPager; private MyPagerAdapter mAdapter; private TabLayout mTabLayout; private int mCurrentItem = 0; // Used to keep the page we want to set in onResume(). @Nullable @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.my_layout, container, false); mViewPager = (ViewPager) view.findViewById(R.id.my_viewpager); mTabLayout = (TabLayout) view.findViewById(R.id.my_tablayout); return view; } @Override public void onResume() { super.onResume(); MyActivity myActivity = (MyActivity) getActivity(); myActivity.getSupportActionBar().setTitle(getString(R.string.my_title)); mAdapter = new MyPagerAdapter(getChildFragmentManager(), myActivity); mViewPager.setAdapter(mAdapter); mViewPager.setOffscreenPageLimit(PagerConstants.OFFSCREEN_PAGE_LIMIT); mViewPager.setCurrentItem(mCurrentItem); // <-- Note the use of mCurrentItem here! mTabLayout.setupWithViewPager(mViewPager); } /** * Call this at any point before needed, for example before performing a FragmentTransaction. */ public void setCurrentItem(int currentItem) { mCurrentItem = currentItem; // This should be called in cases where onResume() is not called later, // for example if you only want to change the page in the ViewPager // when clicking a Button or whatever. Just omit if not needed. mViewPager.setCurrentItem(mCurrentItem); } }
Answer:
For me this worked setting current item after setting adapter
viewPager.setAdapter(new MyPagerAdapter(getSupportFragmentManager())); viewPager.setCurrentItem(idx); pagerSlidingTabStrip.setViewPager(viewPager);// assign viewpager to tabs
Answer:
some guy wrote on forums here. worked for me
if (mViewPager.getAdapter() != null) mViewPager.setAdapter(null); mViewPager.setAdapter(mPagerAdapter); mViewPager.setCurrentItem(desiredPos);
Answer:
I’ve done it this way to restore the current item:
@Override protected void onSaveInstanceState(Bundle outState) { if (mViewPager != null) { outState.putInt(STATE_PAGE_NO, mViewPager.getCurrentItem()); } super.onSaveInstanceState(outState); } @Override protected void onRestoreInstanceState(Bundle savedInstanceState) { if (savedInstanceState != null) { mCurrentPage = savedInstanceState.getInt(STATE_PAGE_NO, 0); } super.onRestoreInstanceState(savedInstanceState); } @Override protected void onRestart() { mViewPager.setCurrentItem(mCurrentPage); super.onRestart(); }
Answer:
By the time I call
setCurrentItem() the view is about to be recreated. So in fact I invoke
setCurrentItem() for the viewpager and afterwards the system calls
onCreateView() and hence creates a new viewpager.
This is the reason for me why I do not see any changes. And this is the reason why a
postDelayed() may help.
Theoretical solution: Postpone the
setCurrentItem() invocation until the view has been recreated.
Practical solution: I have no clue for a stable and simple solution. We should be able to check if the class is about to recreate it’s view and if that is the case postpone the invocation of
setCurrentItem() to the end of
onCreateView()
Answer:
I was working on this problem for one week and I realized that this problem happens because I was using home activity context in view pager fragments and we can only use context in fragment after it gets attached to activity..
When a view pager gets created, activity only attach to the first (0) and second (1) page. When you open second page, the third page gets attached and so on! When you use
setCorrentItem() method and the argument is greater than 1, it wants to open that page before it is attached, so the context in fragment of that page will be null and the application gets crashed! That’s why when you delay
setCorrentItem(), it works! At first it gets attached and then it’ll open the page…
Answer:
A modern approach in a
Fragment or
Activity is to call
ViewPager.setcurrentItem(Int) function in a coroutine in the context of
Dispatchers.Main :
lifecycleScope.launch(Dispatchers.Main) { val index = 1 viewPager.setCurrentItem(index) }
Answer:
I use the dsalaj code as a reference. If necessary I share the code with the complete solution.
I also strongly recommend using ViewPager2
Solution
Both cases have to go within the
Observer {}:
First case: Initialize the adapter only when we have the first data set and not before, since this would generate inconsistencies in the paging. To the first data set we have to pass it as the argument of the Adapter.
Second case: From the first change in the observable we would have from the second data sets onwards which have to be passed to the Adapter through a public method only if we have already initialized the adapter with a first data set.
GL
Answer:
CLEAN AND SIMPLE
No need to add post method just setCurrentItem after calling notifyDataSetChanged().
Answer:
You need to call pager.setCurrentItem(activePage) right after pager.setAdapter(buildAdapter())
@Override public void onResume() { if (pager.getAdapter() != null) { activePage=pager.getCurrentItem(); Log.w(getClass().getSimpleName(), "pager.getAdapter()!=null"); pager.setAdapter(null); } pager.setAdapter(buildAdapter()); pager.setCurrentItem(activePage); }
Tags: androidandroid, methods, view | https://exceptionshub.com/methods-android-viewpager-setcurrentitem-not-working-after-onresume.html | CC-MAIN-2021-10 | en | refinedweb |
I have created the new React application by just using create-react-app and I amtrying to write a unit test to the component named "MessageBox" that I have created in the application.
I have also added the file under my 'src' folder named 'setupTests.js' with the below content:
import * as msenzyme from 'enzyme';
import * as Adapter from 'enzyme-adapter-react-16';
msenzyme.configure({ adapter: new Adapter() });
I ran it by below command :
npm test
But I got the below error:
Enzyme Internal Error: Enzyme expects an adapter to be configured, but found none. To configure an adapter, you should call msenzyme.configure({ > adapter: new Adapter() })
Does someone know how can I solve this problem?
Many of people will tell you to import setupTests.js into your test file. Or to configure enzyme adapter in each of the test file. This will solve your immediate problem only.
But for the long term, if you add the jest.config.js file to your project root. Then you can configure it to run the setup file on launch as shown below:
module.exports = {
setupTestFrameworkScriptFile: "<rootDir>/src/setupTests.ts"
}
This will tell the Jest to run setupTest.ts every time it is launched.
This way if you need to add the polyfills and add global mock like the localstorage then you can add them to your setupTests file and it will be configured everywhere. | https://kodlogs.com/34575/enzyme-internal-error-enzyme-expects-an-adapter-to-be-configured-but-found-none | CC-MAIN-2021-10 | en | refinedweb |
Floating action buttonsFloating action buttons
A floating action button (FAB) represents the primary action of a screen.
There are three types of FABS:
Using FABsUsing FABs
A FAB performs the primary, or most common, action on a screen. It appears in front of all screen content, typically as a circular shape with an icon in its center.
InstallationInstallation
npm install @material/fab
StylesStyles
@use "@material/fab"; @include fab.core-styles;
We recommend using Material Icons from Google Fonts:
<head> <link rel="stylesheet" href=""> </head>
However, you can also use SVG, Font Awesome, or any other icon library you wish.
JavaScript instantiationJavaScript instantiation
The FAB will work without JavaScript, but you can enhance it to have a ripple effect by instantiating
MDCRipple on the root element. See MDC Ripple for details.
import {MDCRipple} from '@material/ripple'; const fabRipple = new MDCRipple(document.querySelector('.mdc-fab'));
See Importing the JS component for more information on how to import JavaScript.
Making FABs accessibleMaking FABs accessible
Material Design spec advises that touch targets should be at least 48px x 48px. While the FAB is 48x48px by default, the mini FAB is 40x40px. Add the following to meet this requirement for mini FABs:
<div class="mdc-touch-target-wrapper"> <button class="mdc-fab mdc-fab--mini mdc-fab--touch"> <div class="mdc-fab__ripple"></div> <span class="material-icons mdc-fab__icon">add</span> <div class="mdc-fab__touch"></div> </button> </div>
Note: The outer
mdc-touch-target-wrapper element is only necessary if you want to avoid potentially overlapping touch targets on adjacent elements (due to collapsing margins).
Regular FABsRegular FABs
Regular FABs are FABs that are not expanded and are a regular size.
Regular FAB exampleRegular FAB example
<button class="mdc-fab" aria- <div class="mdc-fab__ripple"></div> <span class="mdc-fab__icon material-icons">favorite</span> </button>
Note: The floating action button icon can be used with a
span,
i,
img, or
svg element.
Note: IE 11 will not center the icon properly if there is a newline or space after the material icon text.
Mini FABsMini FABs
A mini FAB should be used on smaller screens.
Mini FABs can also be used to create visual continuity with other screen elements.
Mini FAB exampleMini FAB example
<button class="mdc-fab mdc-fab--mini" aria- <div class="mdc-fab__ripple"></div> <span class="mdc-fab__icon material-icons">favorite</span> </button>
Extended FABsExtended FABs
The extended FAB is wider, and it includes a text label.
Extended FAB exampleExtended FAB example
<button class="mdc-fab mdc-fab--extended"> <div class="mdc-fab__ripple"></div> <span class="material-icons mdc-fab__icon">add</span> <span class="mdc-fab__label">Create</span> </button>
Note: The extended FAB must contain label where as the icon is optional. The icon and label may be specified in whichever order is appropriate based on context.
APIAPI
CSS classesCSS classes
A note about
:disabled: No disabled styles are defined for FABs. The FAB promotes action, and should not be displayed in a disabled state. If you want to present a FAB that does not perform an action, you should also present an explanation to the user.
Sass mixinsSass mixins
Basic Sass mixinsBasic Sass mixins
MDC FAB uses MDC Theme's
secondary color by default. Use the following mixins to customize it.
Advanced Sass mixinsAdvanced Sass mixins
A note about advanced mixins: The following mixins are intended for advanced users. These mixins will override the color of the container, ink, or ripple. You can use all of them if you want to completely customize a FAB. Or you can use only one of them, e.g. if you only need to override the ripple color. It is up to you to pick container, ink, and ripple colors that work together, and meet accessibility standards. | https://www.npmjs.com/package/@material/fab | CC-MAIN-2021-10 | en | refinedweb |
view raw
I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices.
This code works, and illustrates my goals:
def shuffle_in_unison(a, b):
assert len(a) == len(b)
shuffled_a = numpy.empty(a.shape, dtype=a.dtype)
shuffled_b = numpy.empty(b.shape, dtype=b.dtype)
permutation = numpy.random.permutation(len(a))
for old_index, new_index in enumerate(permutation):
shuffled_a[new_index] = a[old_index]
shuffled_b[new_index] = b[old_index]
return shuffled_a, shuffled_b
>>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
>>> b = numpy.asarray([1, 2, 3])
>>> shuffle_in_unison(a, b)
(array([[2, 2],
[1, 1],
[3, 3]]), array([2, 1, 3]))
def shuffle_in_unison_scary(a, b):
rng_state = numpy.random.get_state()
numpy.random.shuffle(a)
numpy.random.set_state(rng_state)
numpy.random.shuffle(b). | https://codedump.io/share/2xXpD7nRnZrR/1/better-way-to-shuffle-two-numpy-arrays-in-unison | CC-MAIN-2017-22 | en | refinedweb |
CodePlexProject Hosting for Open Source Software
Hi,
I'm attempting to get a title for the content of each page but reference model.title in layout.cshtml.
Is this possible? If not then could someone tell me how to go about acheiving this?
My problem is that the html involved in making up the page(s) requires me to place some of the structure in layout.cshtml, this being the place where i need to display model.title
Any help is much appreiciated.
cheers,
George
I needed to this as well, it's not as straightforward as it sounds :)
The title is stored in the RoutablePart. This means the template that generates it is Parts_RoutableTitle. (This shape is generated from the RoutablePartDriver)
The way I achieved this was by pushing Parts_RoutableTitle into the Header zone of my layout. Although is there a specific reason why you just want the title as a string rather than the actual template?
Edit: Fixed plurals
I am trying to do this but add Part_RoutableTitle into the BeforeContent zone
WorkContext.Layout.BeforeContent.Add(New.Part_RoutableTitle(), "10"); // page title
I dunno if this is correct but it does not generate an error. how do I now access this in layout.cshtml ?
Part_RoutableTitle
In layout it should display with this line:
@Zone("BeforeContent")
That will render anything in the BeforeContent zone, assuming what you did worked.
I'm just wondering where exactly did you add that code?
I ended up getting it working by supplementing my own RoutePart driver but I've since discovered there are probably simpler ways especially if you needed to do a lot of this stuff:
public class RoutePartDriver : ContentPartDriver<RoutePart> { private IWorkContextAccessor _workContextAccessor; public RoutePartDriver(IWorkContextAccessor workContextAccessor) { _workContextAccessor = workContextAccessor; } protected override DriverResult Display(RoutePart part, string displayType, dynamic shapeHelper) { var context = _workContextAccessor.GetContext(); if (displayType == "Detail") { var headShape1 = shapeHelper.Parts_Header_RoutableTitle(ContentPart: part, Title: part.Title, Path: part.Path); context.Layout.Zones["Header"].Add(headShape1, "1"); } return new DriverResult(); } }
Also note that I've named the shape Parts_Header_RoutableTitle. This meant I could use Placement.info to hide the original Parts_RoutableTitle as well as adjusting the position of this one if I needed.
I reached the above method after discussion in this thread:
You can see this other thread for a possibly better way (towards the end):
But, I haven't yet tested that last method since I've already got everything working how I want with drivers.
BTW it should be Parts plural, I got it wrong in my first reply.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | http://orchard.codeplex.com/discussions/252561 | CC-MAIN-2017-22 | en | refinedweb |
Inside Sablotron: Virtual XML Documentsby Petr Cimprich
March 13, 2002
Whatever internal representation is used, one still needs a convenient interface to access it. The interface needn't be published, as it is typically used for internal purposes only; however, it's hard to imagine a maintainable and extensible XML processor implementation without a well-defined interface to its temporary storage. Beside the fast native interface optimized for specific purposes, the processor can also provide other, standard interfaces allowing to access to documents once they've been parsed. This is true of Sablotron, an Open Source XML processor project I'm currently involved in. I use it here to illustrate the possibilities of XML processors, some of them not deployed quite yet. But back to internals and interfaces; Sablotron uses its own in-memory objects and a set of interface functions optimized for XPath and XSLT, but parsed documents can be accessed and modified via a subset of the standard DOM API at the same time.
Taking it virtual
We have so far considered interfaces as a way to deal with parsed trees stored in memory, but we can envision an interface to structures other than an in-memory tree. An XML processor can be made to register a handler providing user-defined functions of its internal interface. This handler would then be recognized as an external analogy to a parsed XML document. We need not care about its real nature; as long as there is an interface defined via callback functions, we are able to process the virtual document in a manner similar to the internal ones. The processor design requires one more thin layer to be defined -- each task must be expressed in terms of generalized interface functions, calling either internal functions for internal documents or the handler functions for external documents. Given the complete functionality of the processor implemented using the generalized interface, we are then able to involve data managed by external handlers in XPath, XSLT, XQuery, and DOM operations, and so on. Even DOM access to virtual documents from XSLT extension functions is possible in principle.
Architecture of Virtual XML Documents
Now we are able to use virtual XML documents wherever common in-memory parsed documents can be used. The question remains: what is it good for? What kind of handlers could be used and why? We can implement handlers working as interfaces to other DOM processors, which would allow access to documents parsed and cached by third-party software. This sounds interesting, but the only result of this experiment would be to slow down processing.
On the other hand, some pretty useful external handlers could be employed. Consider a handler providing an access to XML documents stored in a RDBMS or native XML database. We would be able to perform XPath queries or transform those documents directly from the DB, without extracting whole documents. In the case of large XML documents, we can expect a significant acceleration of XPath queries and template-driven XSLT transformations. Moreover, we could work with persistent storage using standard XML technologies; good news for developers already familiar with these standards. This kind of handler would certainly be a useful feature, especially for the XML-enabled RDBMS.
Virtual XML documents generated dynamically from multiple sources or stored in several files appear to be another field of interest for our approach. External handlers also make it possible to process documents too large to be stored in memory; nodes aren't accessed before they are needed (if at all). An implementation of convenient handlers isn't always quite trivial, but it often pays in the long run. The benefits of XPath, XSLT, DOM and possibly other ways to deal with arbitrary XML tree representations are well worth implementing a few callback functions.
Working with Sablotron, we have experimented with external handlers to see if it really works. Sablotron enables users to register a handler, a set of callback functions, and to evaluate XPath queries on virtual documents accessed via those functions. A low-level interface implemented in C makes it possible to define callbacks, a query context (namespace declarations and variable bindings), and the current node for the query. This feature (called SXP, Sablotron XPath Processor) is well tested and can be used in production systems. The processor core is also ready to extend the support of external documents to XSLT. However, since the DOM interface to Sablotron isn't implemented using the generalized interface, a DOM API for external documents isn't available currently.
In summary, since the interface working as a base for XPath querying and XSLT transformations can be replaced with user-defined callback functions, external handlers can be used to get an arbitrary XML representation passed to XPath/XSLT directly. What this approach promises is a notable speed increase and a memory consumption decrease when compared to building whole documents. If you would like to experiment with this, I invite you to try out Sablotron. I'm not aware of any other XML processor supporting external handlers currently; information on a similar effort or your experiences with the XPath/XSLT/DOM via callbacks is welcomed.
Ask questions of the author or comment on his ideas in our forum.
(* You must be a member of XML.com to use this feature.)
Comment on this Article
- Other virtual XML document projects
2002-03-15 21:39:54 Sandy Klausner [Reply]
Emerging XML technology is falling short of meeting Web services requirements because (a) it lacks a robust way to capture context, (b) lacks semantic clarity, (c) is inefficient to transport and (d) requires costly equipment to process. CoreOne addresses these shortcomings by specifying context in three representations simultaneously: a) a graphical representation for design and production use; b) a document representation for end user use; and c) a binary representation for transport and for presentation (e.g. PDA) processing. Transport occurs in compact packets that cannot be decrypted; presentation occurs without the parsing, validation, or unmarshalling required for the Document Object Model (DOM). Presentation processing from a binary object is suitable for silicon implementations. The existing paradigm of relational database and markup languages is replaced with object composition and ClearText technologies. Object composition natively represents and process nested structures without the inefficiencies of table joins. ClearText provides a robust bi-directionally linked document model with advanced analytic encoding capabilities.
- XSLT on virtual documents supported
2002-03-15 05:51:18 Petr Cimprich [Reply]
An update: It was possible to evaluate XPath queries on virtual documents only at the time of writing of this article. We have moved forward in the meantime. The latest CVS version of Sablotron can run XSLT on external documents as well. This feature should be included in the very next release (0.9x, where x>0).
- XSLT on virtual documents supported
2002-03-21 18:57:05 Keith Fligg [Reply]
Are there any examples of how to use a virtual XML document with the XSLT processor? Anything at all would be most useful.
Thanks in advance,
- Keith
- XSLT on virtual documents supported
2002-03-25 05:03:54 Pavel Kroh [Reply]
Hello all,
Keith has contacted our mailing list too, our answer to his message is at
Shortly from the answer: version capable to process external documents is on CVS, but not yet on the main trunk, check-out instructions are included in the message, as well as some documentation updates that reflect the latest changes.
Best regards,
Pavel. | http://www.xml.com/pub/a/2002/03/13/sablotron.html | crawl-001 | en | refinedweb |
A Confusion of Stylesby John E. Simpson
January 28, 2004
Q: How do I style a custom element's content?
I want two elements (QUESTION and ANSWER) to be declared in an external DTD. The QUESTION element's data is to be displayed in red, and the ANSWER element's data is to be displayed in green. For this, an external stylesheet needs to be used. How do I include the DTD in my HTML document?
Here's the relevant code:
queans.dtd:
<?xml-stylesheet type="text/css"
href="clr.css"?>
<!ELEMENT QUESTION (#PCDATA)>
<!ELEMENT ANSWER (#PCDATA)>
clr.css:
<STYLE>
QUESTION {font-family:arial;font-size:20pt;color:#ff0000}
ANSWER {font-family:arial;font-size:20pt;color:#00aa00}
</STYLE>
sample.htm:
<!DOCTYPE SYSTEM "queans.dtd">
<HTML>
<BODY>
<QUESTION>What is your favorite web site?</QUESTION>
<ANSWER>My favorite web site is.</ANSWER>
</BODY>
</HTML>
A: As you've no doubt discovered, if you open this sample.htm in your
favorite browser, you'll see the content of the
QUESTION and
ANSWER elements all right but displayed in the browser's
default font (family, size, and color). You can fix this, but you'll have
to straighten out some major misunderstandings first.
To recap, you've got a DTD which defines a couple of elements; a
Cascading Style Sheet which contains a
STYLE element; and an
HTML document which (via the
DOCTYPE declaration) points to
the DTD and includes both elements defined there, plus a couple of others
(
HTML and
BODY). Among the numerous confusions
at work here:
- First, you don't need a DTD if all you want is to style an XML element's content when viewed in a browser. (I'll show you how to do this later.)
- Second, you really can't customize an HTML document by adding non-HTML elements (such as
QUESTIONand
ANSWER), using a DTD or by any other means. Browsers employ a variety of techniques to determine what sort of file or document they're asked to present to the user. (The simplest of these, also arguably the most common, is to look at the file's extension.) If a browser thinks it's reading an HTML document, it will display "known" HTML elements according to its default settings (for font, color, and so on) for that element -- unless overridden by a stylesheet. When it encounters an unknown element, however, all bets are off: it just displays the content in the same font, etc., used for plain old text (such as text contained in a
pelement). Thus, your sample.htm file isn't a true HTML document; it's not even an XHTML document, but a document employing a hybrid, HTML-like XML vocabulary.
- Third, if for some other reason you want to use a DTD, it needs to spell out everything which can be encountered in a document conforming to the DTD. If a document referencing the DTD includes HTML and BODY elements, then those elements must be declared in the DTD.
- Fourth, the association between a stylesheet and the content to be displayed is never made in a DTD, or anywhere else for that matter, but in the document where the content is found. Note that this is true for both XML documents -- including XHTML ones -- and garden-variety HTML ones.
- If you're working with a non-(X)HTML vocabulary, use an
xml-stylesheetprocessing instruction just like the one you've mistakenly placed in your DTD.
- If you're working with (X)HTML, place the reference to the external stylesheet in a
LINKelement -- again, in the document where the content resides (sample.htm, in this case). (If you're using XHTML-with-an-X, all element and attribute names must be lowercase. Thus
LINKbecomes
link,
BODYbecomes
body, and so on.)
- Finally, in a CSS stylesheet, there's never anything but the style specifications: you especially don't include anything that looks like XML or HTML (like the
STYLEelement you've placed in clr.css).
If contemplating all of this hasn't completely exhausted you, here are some alternative solutions to your problem.
Simple customized display of an XML document
With this approach, as I said, you don't need a DTD at all. Just point the XML document to the right stylesheet. Corrected versions of your documents would then look as follows:
clr.css:
QUESTION
{font-family:arial;font-size:20pt;color:#ff0000}
ANSWER {font-family:arial;font-size:20pt;color:#00aa00}
sample.xml:
<?xml-stylesheet type="text/css"
href="clr.css"?>
<HTML>
<BODY>
<QUESTION>What is your favorite web site?</QUESTION>
<ANSWER>My favorite web site is.</ANSWER>
</BODY>
</HTML>
Results when viewed in Mozilla:
(The Internet Explorer display is identical, except for the browser "chrome" of course.)
Simple display of an XHTML document
This is a little more complex, but still simple. The stylesheet remains
the same as above. In the document itself, as I said, change all uppercase
element and attribute names to lowercase. Furthermore, while it's not
absolutely essential in the almost-anything-goes world of browsers, you
should formally associate your document with the XHTML vocabulary in two
ways:
(1) use a DOCTYPE declaration which points to the XHTML DTD of
your choice, and (2) use an
xmlns attribute -- that is, a namespace
declaration -- to assert which of the document's elements are in the XHTML
namespace. (Typically, all elements in an "XHTML" document are in
the XHTML namespace, but this needn't strictly be the case.) And finally,
of course, you must add a
link element to connect your
document to
clr.css.
At this point,
sample.htm will now resemble the following
(assuming you want to use the XHTML "transitional" vocabulary), with the
most significant changes in boldface:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Transitional//EN"
"">
<html xmlns="">
<link rel="stylesheet" type="text/css" href="clr.css" />
<body>
<QUESTION>What is your favorite web site?</QUESTION>
<ANSWER>My favorite web site is.</ANSWER>
</body>
</html>
In this case, browser behavior diverges: Mozilla's display of the content matches that of the previous solution. Internet Explorer, however, displays the question and answer in the browser's default font (family, size, and color). The "problem," I think, is that IE still doesn't recognize your QUESTION and ANSWER elements as XHTML. (Understandably, I might add. This seems to be a more correct behavior than Mozilla's.)
The only way to fix this is to replace your QUESTION and ANSWER
elements with true XHTML elements, differentiating between them using
class attributes. Something like this:
<>
The implication of this change is that you must also change
clr.css's selectors; here's a general solution:
.QUESTION
{font-family:arial;font-size:20pt;color:#ff0000}
.ANSWER {font-family:arial;font-size:20pt;color:#00aa00}
(The dot preceding each selector associates that style with any element
which has a
class attribute with the indicated value.)
Now both browsers again behave identically.
Getting fancier...
I don't mean "fancier" in terms of the display; I mean it in terms of
how to attain your objective. This solution builds on the
previous one; it assumes that you truly do need those non-(X)HTML elements
in your document, and you want to use the uppercase element names. It
simplifies some things, such as
sample.htm itself (which
returns to something very like its original form). The complexity comes
from the addition of a second stylesheet -- this one in XSLT. Your
document would now look something like this:
<?xml-stylesheet type="text/xsl"
href="clr.xsl"?>
<HTML>
<BODY>
<QUESTION>What is your favorite web site?</QUESTION>
<ANSWER>My favorite web site is.</ANSWER>
</BODY>
</HTML>
The XSLT stylesheet in question would specify the transformation of
this simplified
sample.htm into a result tree which more or
less matches the version of
sample.htm in the previous
solution. Here's an XSLT stylesheet (just one of many approaches,
depending on how rigorous you need it to be) to accomplish this:
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
xmlns:
<xsl:output
<xsl:template
<html>
<link rel="stylesheet" type="text/css" href="clr.css" />
<xsl:apply-templates />
</html>
</xsl:template>
<xsl:template
<body><xsl:apply-templates /></body>
</xsl:template>
<xsl:template
<span class="{name()}"><xsl:apply-templates /></span>
</xsl:template>
</xsl:stylesheet>
(I'll leave readers to extract from this example those bits which most interest them.)
And how, you might wonder, do the browsers treat
sample.htm now? Sadly, they ignore -- at least under Windows
-- the result tree from this transformation; indeed they don't attend to
the xml-stylesheet declaration at all. I suspect this is because of that
operating system's stubborn reliance on filename extensions to determine
how to treat the document:
.htm (or
.html) means
the document is an (X)HTML document and damn the consequences.
Accordingly,
sample.htm displays exactly as it did way back
at the beginning of this answer, in default fonts and colors.
(Rather than shake your head in misery at this point, you might try
changing the extension to
.xml. Test it in both
browsers. You'll find something else to shake your head over.)
For what it's worth, if you run this most recent version of
sample.htm through a standalone XSLT engine like Saxon,
you'll see the result tree you need:
<>
If you save this result tree to a separate file, the browsers handle it just fine (and identically).
You may find this last alternative -- even though it's in some ways the most correct, requiring the least modification to your original source document -- a bit out of reach for now. Don't despair. Just keep asking questions and doing research.
Share your opinion in our forums.
(* You must be a member of XML.com to use this feature.)
Comment on this Article
- Wrong Namespace
2004-02-02 23:05:02 Christian Schmidt-Guetter [Reply]
Hallo,
why do you use a wrong namespace in the *correct* XHTML version?
Read
You can then read clearly:
"The root element of the document must be html
[and] ... must contain an xmlns declaration for the XHTML namespace. The namespace for XHTML is defined to be."
NOT "" as you are using in your example.
Greetings
Christian Schmidt-Guetter
- Wrong Namespace
2004-02-03 05:48:11 John Simpson [Reply]
Quite right -- thanks for catching that. The namespace URI I used in the column *was* at one time the correct one for Transitional XHTML (see, e.g.,). But that is no longer the case. The correct URI is indeed.
Thanks again!
JES | http://www.xml.com/pub/a/2004/01/28/qa.html | crawl-001 | en | refinedweb |
The QTableWidgetItem class provides an item for use with the QTableWidget class. More...
#include <QTableWidgetItem>
The QTableWidgetItem class provides an item for use with the QTableWidget class.
Table items are used to hold pieces of information for table widgets. Items usually contain text, icons, or checkboxes
The QTableWidgetItem class is a convenience class that replaces the QTableItem class in Qt 3. It provides an item for use with the QTableWidget class.
Items are usually constructed with a table widget as their parent then inserted at a particular position specified by row and column numbers:
QTableWidgetItem *newItem = new QTableWidgetItem(tr("%1").arg( pow(row, column+1))); tableWidget->setItem(row, column, newItem);
Each item can have its own background color which is set with the setBackgroundColor() function. The current background color can be found with backgroundColor(). The text label for each item can be rendered with its own font and text color. These are specified with the setFont() and setTextColor() functions, and read with font() and textColor().
Items can be made checkable by setting the appropriate flag value with the setFlags() function. The current state of the item's flags can be read with flags().
See also QTableWidget.
Constructs a table item of the specified type that does not belong to any table.
See also type().
Constructs a table item with the given text.
See also type().
Destroys the table item.
Returns the color used to render the item's background.
See also textColor() and setBackgroundColor().
Returns the checked state of the list item (see Qt::CheckState).
See also setCheckState() and flags().
Creates an exact copy of the item.
Returns the item's data for the given role.
See also setData().
Returns the flags used to describe the item. These determine whether the item can be checked, edited, and selected.
See also setFlags().
Returns the font used to render the item's text.
See also setFont().
Returns the item's icon.
See also setIcon().
Reads the item from stream in.
See also write().
Sets the item's background color to the specified color.
See also backgroundColor() and setTextColor().
Sets the check state of the table item to be state.
See also checkState().
Sets the item's data for the given role to the specified value.
See also data().
Sets the flags for the item to the given flags. These determine whether the item can be selected or modified.
See also flags().
Sets the font used to display the item's text to the given font.
See also font(), setText(), and setTextColor().
Sets the item's icon to the icon specified.
See also icon() and setText().
Sets the item's status tip to the string specified by statusTip.
See also statusTip(), setToolTip(), and setWhatsThis().
Sets the item's text to the text specified.
See also text(), setFont(), and setTextColor().
Sets the text alignment for the item's text to the alignment specified (see Qt::AlignmentFlag).
See also textAlignment().
Sets the color used to display the item's text to the given color.
See also textColor(), setFont(), and setText().
Sets the item's tooltip to the string specified by toolTip.
See also toolTip(), setStatusTip(), and setWhatsThis().
Sets the item's "What's This?" help to the string specified by whatsThis.
See also whatsThis(), setStatusTip(), and setToolTip().
Returns the item's status tip.
See also setStatusTip().
Returns the table widget that contains the item.
Returns the item's text.
See also setText().
Returns the text alignment for the item's text (see Qt::AlignmentFlag).
See also setTextAlignment().
Returns the color used to render the item's text.
See also backgroundColor() and setTextColor().
Returns the item's tooltip.
See also setToolTip().
Returns the type passed to the QTableWidgetItem constructor.
Returns the item's "What's This?" help.
See also setWhatsThis().
Writes the item to stream out.
See also read().
Returns true if the item is less than the other item; otherwise returns false.
Assigns other's data and flags to this item. Note that type() and tableWidget() are not copied.
This function is useful when reimplementing clone().
See also data() and flags().
The default type for table widget items.
See also UserType and type().
The minimum value for custom types. Values below UserType are reserved by Qt.
See also Type and type().
This is an overloaded member function, provided for convenience. It behaves essentially like the above function.
Writes the table widget item item to stream out.
This operator uses QTableWidgetItem::write().
See also Format of the QDataStream Operators.
This is an overloaded member function, provided for convenience. It behaves essentially like the above function.
Reads a table widget item from stream in into item.
This operator uses QTableWidgetItem::read().
See also Format of the QDataStream Operators. | http://doc.trolltech.com/4.0/qtablewidgetitem.html | crawl-001 | en | refinedweb |
This demo was contributed by Justin Gilbert, game programming guru at Multimedia Games and Vanguard Games.
This applet was written in Python and compiled with Jython 2.2.1 for using Java 1.5.0.13. Make sure you have a recent version of the JVM installed. The file is 935 KB big, mostly code but including a few maps. See the A* code below.
Instructions
Double click on the grid to set the start point (green) and once again to set the end point (red).
Click on Step to perform a single operation of the planner, and Run to visualize the algorithm.
Select a map and click on Load to change the weights of the grid.
PathPlannerAStar
This class contains all the memory and logic needed to plan a solution given a start and end point. It assumes that we are using a two deimensional grid. Each cell of the grid can be represented by node containing the location and costs. It maintains an open and closed list nodes used in the A* algorithm. The open list is simply a python list where the closed list has been implemented as a hash table.
class PathPlannerAStar: m_nStartX = 0 m_nStartY = 0 m_nGoalX = 0 m_nGoalY = 0 m_pCurrentNode = None m_bFoundGoal = False m_bIsDone = False m_bDiagonalPenalty = True m_bHeuristicWeight = 1.1 m_nClosedHashTableSize = 199 m_dictClosedHashTable = dict() m_lsOpenList = list() m_lsSolution = list() m_lsSuccessorPoints = [(-1,-1),(-1,0),(-1,1),(0,-1),\ (0,1),(1,-1),(1,0),(1,1)]
initWeights
Simple helper function to initialize the weights for each node in the grid. It defaults all values to a cost of 1.
def initWeights(self): self.weightData = [MAP_WEIGHT1 for x in \ range(self.gridWidth * self.gridHeight)]
PlanPath
Start the planner off with a start and end location. Returns a boolean as to whether or not the goal was a valid cell.
def PlanPath(self, nStart, nGoal):
First thing we want to do is make sure the planner can find a solution. If the goal is a wall, exit the function with a failing return value.
if self.weightData[nGoal] == MAP_DATA_WALL: print "Goal is invalid : blocked cell" return False
Now convert the 1D index numbers into 2D coordinates for both the start and goal.
self.m_nStartX = nStart % self.gridWidth self.m_nStartY = nStart / self.gridWidth self.m_nGoalX = nGoal % self.gridWidth self.m_nGoalY = nGoal / self.gridWidth
Reset the current node as well as the flags for finding a solution.
self.m_pCurrentNode = None self.m_bFoundGoal = False self.m_bIsDone = False
Create the first node to insert into the open list. For every node we insert into the open list we have to compute its heuristic cost. This is simply a best guess at how costly it would be to travel from the node to the goal. Here we are using the distance between the two points.
root = PlannerNode(self.m_nStartX,self.m_nStartY,None) root.ComputeHeuristicCost(self.m_nGoalX,self.m_nGoalY) self.SmartInsert(self.m_lsOpenList,root) return True
Run
Executes one step of the planner. If the open list is empty the planner is marked as done and no further work will be done.
def Run(self): tempNode = None
Here we make sure there is still work to be done. If there is we pop the top node off the open list, assign our current node to it, and insert it into the closed list.
if len(self.m_lsOpenList)>0: self.m_pCurrentNode = self.m_lsOpenList.pop() self.HashTableInsert(self.m_pCurrentNode)
Now compare the current node to the goal. If they match then we have found a path to the goal. Since each node has a link to its parent, we can easily create a solution set by traversing back to the start.
if (self.m_pCurrentNode.x == self.m_nGoalX \ and self.m_pCurrentNode.y == self.m_nGoalY): self.m_bFoundGoal = True self.m_bIsDone = True tempNode = self.m_pCurrentNode while(tempNode is not None): self.m_lsSolution.append(tempNode) tempNode = tempNode.parent
If the current node is not the goal then insert all of the node’s qualifying neighbors.
self.InsertSuccessors(self.m_pCurrentNode)
Nothing left in the open list, which means no more work can be done. It is possible to finish without finding a solution.
else: self.m_bIsDone = True
CleanUp
Clear out the lists and resets the current node.
def CleanUp(self): self.m_dictClosedHashTable.clear() self.m_lsOpenList = list() self.m_lsSolution = list() self.m_pCurrentNode = None
SmartInsert
Helper function. Performs insertion sort to keep nodes into the correct location.
def SmartInsert(self, lsList, node): insertPosition = 0 i = 0 k = len(lsList) - 1 while i <= k: l = (i + k) / 2 i1 = node.Compare(lsList[l]) if i1 < 0: i = l + 1 insertPosition = -(i + 1) elif i1 > 0: k = l - 1 insertPosition = -(i + 1) else: insertPosition = l break if insertPosition < 0: insertPosition= -insertPosition - 1 lsList.insert(insertPosition,node)
HashTableInsert
Inserts a node into the closed list. A hashing function is used to separate the nodes into buckets for faster lookup later on.
def HashTableInsert(self, node): nIndex = COMPUTE_HASH_CODE(node.x,node.y) \ % self.m_nClosedHashTableSize if self.m_dictClosedHashTable.has_key(nIndex): node.hashTableNext = self.m_dictClosedHashTable[nIndex] else: node.hashTableNext = None self.m_dictClosedHashTable[nIndex] = node
HashTableFind
Returns a node given it’s x and y coordinate.
def HashTableFind(self, x, y): if self.m_dictClosedHashTable is None: return None nIndex = COMPUTE_HASH_CODE(x,y) % self.m_nClosedHashTableSize node = self.m_dictClosedHashTable[nIndex] while node is not None: if node.x == x and node.y == y: print "Found node!!!" return node node = node.hashTableNext return None
HashTableRemove
Removes a node from the closed list given it’s x and y coordinate.
def HashTableRemove(self, x, y): nIndex = COMPUTE_HASH_CODE(x,y) % self.m_nClosedHashTableSize node = self.m_dictClosedHashTable[nIndex] prevNode = None while node is not None: if node.x == x and node.y == y: if prevNode is not None: prevNode.hashTableNext = node.hashTableNext else: self.m_dictClosedHashTable[nIndex] = node.hashTableNext node.hashTableNext = None return node prevNode = node node = node.hashTableNext return None
InsertSuccessors
This is where most of the work is done. It will insert any new valid node into the open list. The algorithm breaks down like this:
- For each neighbor:
- Make sure it is a valid cell (within map bounds, not a wall). If not, skip it.
- Calculate the final cost of the node.
- Check to see if it is in the closed list
- If it is:
- If the new cost is less than the found node’s cost, remove the found node from the closed list and insert it into the open list with the new cost. If not, skip it.
- Otherwise:
- Check to see if it is in the open list
- If it is:
- If the new cost is less than the found node’s cost, remove the found node from the open list and reinsert it with the new cost. If not, skip it.
- Otherwise: Insert a new node with the new cost into the open list.
def InsertSuccessors(self, pn): newNode = None nNewX = 0 nNewY = 0 fNewCost = 0.0 bSkip = False for x,y in self.m_lsSuccessorPoints: nNewX = x + pn.x nNewY = y + pn.y bSkip = False newNode = None
While calculating each successor point, do a check for blocked cells.
if nNewX >=0 and nNewY>=0 and nNewX < self.gridWidth and nNewY < self.gridHeight: if self.weightData[nNewY * self.gridWidth + nNewX] == MAP_WEIGHT5: continue
Calculate the new cost. This is simply the cost of the parent node plus the weight for the given cell. There can also be a penalty added when traveling diagonally.
fNewCost = pn.givenCost if x!=0 and y!=0 and self.m_bDiagonalPenalty == 1: fNewCost += MAP_WEIGHTS[self.weightData[nNewY * self.gridWidth + nNewX]] * 1.4142 else: fNewCost += MAP_WEIGHTS[self.weightData[nNewY * self.gridWidth + nNewX]]
Now try to find the node in the closed list. If we find it, compare it’s given cost to the new cost. If the new node’s cost is cheaper, we want to remove it from the closed list and insert it with the new cost into the open list. In either case, if we found it in the closed list we need to skip ahead to the next neighbor.
newNode = self.HashTableFind(nNewX,nNewY) if newNode is not None: if fNewCost < newNode.givenCost: self.HashTableRemove(nNewX,nNewY) newNode.parent = pn newNode.givenCost = fNewCost newNode.finalCost = newNode.givenCost + newNode.heuristicCost * self.m_bHeuristicWeight self.SmartInsert(self.m_lsOpenList,newNode) continue
Next, if it wasn’t in the closed list, try to find it in the open list. If we find it, compare it’s given cost to the new cost. If the new node’s cost is cheaper, reinsert the node with the new cost. This kills off paths that we can already determine to be more costly. Again, if we find it in the open list we want to skip ahead to the next neighbor.
nSize = len(self.m_lsOpenList) for newNode in self.m_lsOpenList: if newNode.x == nNewX and newNode.y == nNewY: bSkip = True if fNewCost < newNode.givenCost: self.m_lsOpenList.remove(newNode) newNode.parent = pn newNode.givenCost = fNewCost newNode.finalCost = newNode.givenCost + newNode.heuristicCost * self.m_bHeuristicWeight self.SmartInsert(self.m_lsOpenList,newNode) break if bSkip: continue
It wasn’t in the open or closed list so insert a new node into the open list.
newNode = PlannerNode(nNewX,nNewY,pn) newNode.ComputeHeuristicCost(self.m_nGoalX,self.m_nGoalY) newNode.givenCost = fNewCost newNode.finalCost = newNode.givenCost + newNode.heuristicCost * self.m_bHeuristicWeight self.SmartInsert(self.m_lsOpenList,newNode)
7 Comments ↓
This seems to be a solid implementation of the A* algorithm. However, I am not quite sure what this article is mainly about, the implementation of an algorithm in a certain programming language, or the A* algorithm.This gives a more accurate heuristic that matches the cost function. It is still admissible (i.e., always underestimates the cost) and as a bonus, faster to calculate than the euclidean distance with its squares and roots.
If this was meant as an article about the A* algorithm, I would have given more focus to other aspects than those mentioned in this article.
First thing to mention here is the heuristic used. The quality, speed and accuracy of a design of the A* algorithm rises and falls with the choice of the heuristic. I would even say it is the single most decisive factor of this algorithm! And the heuristic is only mentioned in half a sentence here!
Indeed more care should be taken when choosing the heuristic. In this case, where there are 8 possibilities to move from one square (straight or diagonal) to the next, the euclidean distance is suboptimal as a heuristic. Of course, it underestimates the cost and is therefore an admissible heuristic, but it does not do a good job at it: the heuristic does not match the cost function, which means that although still guaranteeing optimality, the algorithm can (and most likely will) take longer to run.
A better choice would be to take the shortest direct distance which can be achieved by an actual path:
The fact that the two mentioned heuristics are admissible (if all the weights of the map are >= 1, that is) allows us to simplify the used algorithm:
As we always only pick the best node from the stack and put it in the closed list, the case that we hit this same node again later with a lower cost cannot happen! Of course, dead code doesn't really do any harm here, but as this aspires to be an example of a correct and well implemented A* algorithm, dead code is not a good thing. And if we'd have an additional array of booleans that stores if a cell is closed, we could save even more processing power here.
However, it is possible to justify this query. If we want to be able to modify the heuristic to be non-admissible, e.g. in allowing weights < 1, for example "roads" that allow faster movement than average terrain, the query mentioned above becomes necessary. When using non-admissible heuristics the result is not guaranteed to be optimal anymore, but A* becomes faster, which is a valid choice in a game.
So, summing up, I would have given the heuristic some more focus, and maybe a bit less focus on the data structures used.
Speaking of which, of course the choice of a good data structure is very important, so maybe a sentence or two on why a hash and a sorted list were chosen (and not heaps or trees or indexed arrays) would have been nice. Also, operations on the open list are more critical than operations on the closed list if we use an admissible heuristic, as we then only need membership checks for the closed list. In the article there is a lot on the hash for the closed list, but only little on the datastructure of the open list.
Sorry for the long and a bit critical comment, but as I've read a bit about the A* algorithm already (I recommend Amit's A* pages, he also mentioned the heuristic used above), I felt I should give some remarks here.
Edit: I just noticed the m_bHeuristicWeight = 1.1, which is a factor the heuristic is multiplied with. This is not explained in the article, but i assume it was introduced to speed up the algorithm, and (i guess) can be understood as an expected average weight of the path. This weight makes the heuristic non-admissible, and justifies the query of the closed list.
First off I want to thank you for the long post. This is the first public technical document I have had the chance to write and any criticisms are more than welcome.
After reading through again there are a few areas, as you mentioned, that could have been explained in more detail. You are completely right about the heuristic. The only reason to even implement your planners using A* is to take advantage of the heuristic and it should have been explained a little better. To be completely honest the m_bHeuristicWeight variable was simply to make it act more like a best first planner. So, yes, it was added to speed up the algorithm.
I am by no means an expert on the subject and I am glad you took the time to comment. Only good can come from constructive criticism. It moves us forward and allows everyone to see where things can be made better.
RobinB, this blog doesn't only contain articles with a particular purpose but cool things relating to game AI generally. And Justin's applet definitely fits in that category! :-)
If anything, it's interesting to play around with and watch it work! No 1000 word treatment of A* could do justice to all the subtleties of the algorithm...
Would you prefer just a demo next time, or was the code welcome nevertheless?
Alex
Yeah, I probably sounded a bit too harsh here, I just wasnt sure what the article was about. That's not a mistake of the article or the blog, but of my trying-to-categorize self :-)
I understood it as an article showing how A* should work, but I should have seen it as a tech demo from the beginning, I guess. Sorry Justin for being a bit overly critical here!
And just to clarify, I like to see the code! I think thats at least as important as seeing the demo. I can see the AI used in a game while I play everyday, but I find it way more interesting to see how it works internally (what makes it tick, as Sylar would say :-)).
I liked the neat demo :) it'd have been nice to see the colour of what the paths are going over (gradients on the colours), and I have yet to suss all the explanation since I've been so strapped for time, but will bookmark it for later!
Looks like a good overview of how it works with the code, so it'll be no doubt useful to me.
Andrew,
Thanks for your feedback! That's a nice feature suggestion.
RobinB,
This blog has almost hit 1,400 subscribers now too... so there's a certain wisdom of the crowds going on here. If I (or guess bloggers) get anything wrong or miss anything out, chances are it'll be caught!
So in the end it works out great, thanks to contributions like yours :-)
Alex
P.S. If you want to write an article about any of the issues you discussed, you know where to find me!
Hey, I read through it today, will be useful for sure. I might dig out the JAR off your site to look at it and dissect a bit too (would have been great to have a way to upload a gif into it for the terrain or whatnot :) ).
And I also, since it looked cool, used one of the tests I ran in it as my darkened background on some business cards I finally sorted today (21 days until I can see them however :( ). Would have been nicer with gradients but still looks neat, better then it being just black anyway.
You can also reply to this thread in the forums. | http://aigamedev.com/demo/astar-pathfinding | crawl-001 | en | refinedweb |
WiFi and access points
- David Lindgren last edited by
As a beginner I try to connect to wifi using my phone as possible access point.
I found the below code:
from network import WLAN wlan=WLAN(mode=WLAN.STA_AP) nets=wlan.scan() print(nets) for net in nets: print (net.ssid)
and thought it would provide all networks, i.e my phone and my router. But, only my router shows up and I get a message that AP
Traceback (most recent call last):
File "boot.py", line 16, in <module>
ValueError: AP SSID not given
WMAC: 10521C65D52C
The Wifi SSD is in a file pybytes_config.json. There is another settingsfile but I do not know where.
But, shouldn't the SCAN method provide all networks without any SSID settings? | https://forum.pycom.io/topic/7166/wifi-and-access-points | CC-MAIN-2022-05 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.