text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
SI7021 I2C sensor
I'm trying to get the SI7021 sensor working, which is connected on the I2C bus. Connectivity basically works and I can also write and read bytes. But I strugle with converting the example Arduino code to MicroPython. I've also found another example using the smbus Python library, but I also couldn't find out how to convert this.
Here is what I've done so far:
from machine import I2C import time ADDR = 0x40 SI7021_MEASTEMP_NOHOLD_CMD = 0xF3 SI7021_MEASRH_NOHOLD_CMD = 0xF5 # configure the I2C bus i2c = I2C(0, I2C.MASTER, baudrate=100000) # Humidity i2c.writeto(ADDR, bytes([SI7021_MEASRH_NOHOLD_CMD])) time.sleep(0.3) hum0 = i2c.readfrom(ADDR, 1) hum1 = i2c.readfrom(ADDR, 1) print(hum0) print(hum1) #humidity = ((hum0 * 256 + hum1) * 125 / 65536.0) - 6 #print(humidity) # Temperature i2c.writeto(ADDR, bytes([SI7021_MEASTEMP_NOHOLD_CMD])) time.sleep(0.3) temp0 = i2c.readfrom(ADDR, 1) temp1 = i2c.readfrom(ADDR, 1) print(temp0) print(temp1) #cTemp = ((temp0 * 256 + temp1) * 175.72 / 65536.0) - 46.85 #fTemp = cTemp * 1.8 + 32 #print(cTemp)
Output:
b'F' b'F' b'h' b'h'
Currently I don't understand how I can read and convert the data correctly. Sending the commands works and I also get some sort of data back when calling the
i2c.readfromfunction, but then I'm lost. F.e. I don't know how many bytes I must read and then how to convert it to an actual useable value.
Any help is much appreciated!
Cheers,
Tobias
@crankshaft said in SI7021 I2C sensor:
Try This;
WOW! You're great, works like a charm... Thank you very very much!
- crankshaft last edited by
Try This; | https://forum.pycom.io/topic/648/si7021-i2c-sensor | CC-MAIN-2020-40 | refinedweb | 272 | 62.34 |
29,141
This item is only available as the following downloads:
( DOC )
( PDF )
Full Text
Alister Hughes
P o BoI 65
St.George
Grenada
OShTINDIEB
THE GRENADA NAWaLETTER
or. The Week, hadina ftotober 29th 1977.
SALARIES EVIaIO REPORT BOON
The report of the Commission appointed to review ,the salaria of all
Government monthly paid employees is expected to be submitted to
Government shortly. In an exclusive interview in Grenada with
NEWSLETTER on Tuesday (25th), this was disclosed by Mr Eldon Maturin,
Alternate Chairman of the Commission and Tax Administration Advisor
attached to the CARICOM Secretariat.
The matter of the revision of salaries of Government monthly paid
employees dates back to November 1975. At that time, Mr George
Hosten, Minister of Finance, promised the Civil Service Aseociation
(CSA), the Grenada Union of Teachers (GUT) and the Technical & Allied
Workers Union (TAWJ) (the three trade unions representing Government
monthly Baid workers), that he would deal with the revision of
salaries "as soon as possible".
Commencing in January 1976, increasing pressure was put on Government
to commence negotiations and, when Government failed to act, strike
action was decided upon at a meeting of the unions on February 14th.
Two days later, however, Government advised the unions that a
negotiating team had been appointed and discussions began on February
25th.
The demand made by the unions was divided into three sections. For
the salary range up to ECB4,884.00 per annum, an increase of 65% was
asked. The salary range above BCD4,881.00 and under EC#8,628.00
should be increased by 50 and the salary range above EC#8,628.00
by 409, these increases to be effective from January stt 1974.
Counter .Offer
Government counter offered with increases spread over three years
commencing January 1st 1975. For the lowest category specified
by the unions,.121 was offered for the first year, 12j% for the
second and 15 for the third. In the middle income group, the
offer was 10%, 10o and 15%, while the top income bracket was offered
7-%, 7J% and 10%.
"he unions rejected Government's offer and, after several meetings,
Alister Hughes
THE GREHADL NEWSLETTER P Week Ending 29*10.77
paga ..
Government proposed that the whole matter of salaries revision
should be referred to a Salaries Commission. 'here were further
discussions and the unions finally agreed to the appointment of the
Commission when Government agreed to pay an interim salary increase.
.Ahis increase was at the rate of 25%6 for the lowest salary earners,
15% for the middle bracket and 10% for the top bracket. It was
agreed that these interim increases would take effect from
January 1st 1975, and the findings of the Salaries Commission
Would be implemented from January 1st 1977.
The salaries of Government monthly paid employees first reflected
the interim increases at the end of April 1976, and it was
reported that these increases amounted to over EC1200,000.00
per month. Back pay due over the 15 months from January
1975 to March 1976 was paid in two installments. The first,
covering the period January to December 1975, was paid in ,
June 1976 and is reported to have amounted to about E0$2 million.
The balance was paid four months later.
On October 26th 1976, CSA, GUT and TAWU wrote the Minister of
Finance, Mr George Hosten, expressing concern over the delay
in the commencement of sittings of the Salaries Revision
Commission which had been set up by Government. This
Commission was headed by Mr John Tyndall of the CARICOM Secretariat,
the other members being Senator James Manswell, Executive
Secretary of the Trinidad & Tobago Publio Services Association,
and Mr Edwy Talma, former Deputy Prime Minister of Barbados.
In their letter to Mr Hosten, the unions demanded that the
Commission commence sittings no later than November 12th 1976.
unwise
However, the Commission did not begin its sittings until 1nth
December 1976 and, at that time, Mr Eldon Mat rin, the
Alternate Chairman of the Commission, said it would be unwise
to promise the Commission's Report in time for implementation
on April 1st 1977. With the volume of work which had to
be done, Mr Maturin thought July 1st 1977 was a more realistic
date. The 'volume of work' referred to by the Alternate
Chairman was under the teras of reference of the Commission,
*'** n~brmisaion of recommendations for the revision of salaries of
Alister H ghea.
THL GRE~ADA NEWSLETTER Week Ending 29.10.77
Pae 31
Government monthly paid employees and recommendations for the
restructuring of the Civil Service.
Speaking to NEWSLETTER on Tuesday (25th), Mr Maturin said the
Commission's Report is "more or less complete". "What I am
doing at the moment", he said, "is straightening out some figures
in the hope that, when my colleagues arrive, we can all agree and
then hand in the Report."
With reference to the restructuring of the Civil Service, the
Alternate Chairman said no attempt had been made to undertake a
complete restructuring. "What we have done", he said, "is to
recommend the removal of anomalies and the modernisation of the
Civil Service."
Mr Maturin, who is now a guest at the Holiday Inn, said this is
his last visit to Grenada in connection with the Salaries Revision
Commission. "The finishing touches are now being put to the
Report", he said, "and it will definitely be handed in to
Government before I leave."
(28th)
NEIWSLETTE checked with Mr Laturin today/and learned that Senator
James ian-ell is expected to arrive in Grenada tomorrow.
Mr Maturin said the third member of the Commission, Mr Talma, is
expected at a later date, but he did not know exeotly when.
(849 words)
GO-SLOW NFAR8 DR (9)
As this copy is being prepared for press (am 28th), the signs are
that the industrial action taken by the Commercial & Industrial
Workers Union (CIWU) against the firm of Jonas Browne & Hubbard
Ltd may soon be called off.
This industrial action took the form of a go-slow and it has been
in effect since October 21st and affected all departments of the
Company including two supermarkets, shipping agency, hardware and
lumber departments and dress shop. After negotiations became
deadlocked, CIWU took this action to support its claim for a cost
of living allowance.
(continued)
Alister Hughes
THE GRENADA NEWSLETTER Week Ending 29.10.77
This dispute appeared to be about to be resolved late last week
when both the Labour Commissioner, Mr Robert Robinson, and the
Company's negotiating team gained the impression that the union
had accepted an offer put forward by the Company at a meeting
held under the chairmanship of Mr Robinson.
This offer was that salary grades up to EC$250.00 per month should
have a cost of living allowance of 15, those between EC$251.00
and EC#550.00 would have an allowance of 10S, and those above
EC$350.00, 7j6. These allowances would be paid with effect
from let October 1976.
According to both the Labour Commissioner and Mr Fred Toppin,
Managing Director of Jonas Browne & Hubbard Ltd, this offer
was accepted by CIWU's negotiating team and the only matter in
dispute was the date at which a further 2s would be added to
,he allowance in the three salary grades. The Company was
willing to pay this additional percentage from November 1st
1977 while the union wanted it introduced from October ist 1977.
In an interview on Monday (24th), Mr Eric Pierre, President of
CIWU, told NEWSLETTER that his negotiating tean had never
accepted this offer from the Company and he had informed both
the Company and the Labour Commissioner of this. Mr Pierre
said the Company's offer for the lowest and the highest salary
grades was acceptable, but in the middle grade, where the Company
had offered 10%, the union insisted on 12%. Further, the
union was not prepared to accept the additional 21*A from lst
November; it remained firm on its stand that the additional
percentage should be paid from October let.
The go-slow continued throughout this week and, as a result of
the efforts of the Labour Commissioner, the following compromise
was agreed to yesterday (27th) by both CIWU and the Company.
Effective October 1st 1976, salary grades up to EC*250.00 a
month will receive a cost of living allowance of 15%, and the
allowance for salary grades from EC0251.00 to EC4350.00 will be
at the fate of 12JY. A new salary grade of B04351.00 to
?xs600o00 has been added, and the allowance for this grade will
Alister Hughes
THE GRENADA NEWSLETTER Week Ending 29.10.77
Page 5
be at 71, while those over EC#60Q.00 will receive an allowance
at the rate of 5fb. An additional increase of 2/o will be paid
to all grades with effect from November let 1977.
The dispute has not yet come to an end, however, because three
employees were dismissed since the go-slow began and CIWJ demands
that they be reinstated before the industrial action is called
off. Mr tric Pierre, President of OIWU, told NEWbaLTTiE
yesterday (27th) that the union's stand is that the workers must
be reinstated first and then justification for the dismissals can
be discussed.
Contacted today (28th), Mr 'red Toppin, Manging Director of the
Company, said he was meeting with his Directors today abd that
the Company's position will be conveyed to the Labour Commissioner.
(589 words)
SENATOR KNIGHT DENIES CHARGES
In an interview on Trinidad & Tobago Television (TTT), seen in
Grenada on Tuesday (25th), Senator Derek Knight, Minister without
Portfolio in the Grenada Government, denied that officers of the
Grenada Defence Force have been sent to Chile for training.
"This is absolutely untrue", he said.
Senator Knight was replying to charges made by Mr Maurice Bishop,
Leader of the Opposition in the Grenada House of Representatives,
in an interview on TTT, and Senator Knight said that, contrary to
what Mr Bishop has said, there is no widespread concern in
Grenada over the Grenada Government's connection with the Chile
Government. Senator Knight said that Chile was merely one of
several countries with which Grenada is arranging aid and the
arrangements with Chile happened to be the first to be finalized.
Reference wae made to a shipment of "unspecified cargo" which
arrived in Grenada on a Chilean Army plane on October 2nd.
Senator Knight said that if the Grenada Government wished to
import anything it didn't have to hide to do so, and he hinted
that what had been received was a shipment of life-jackets from
Chile for Grenadian fishermen.
(continued)
Alister Hughes
THE GRENADA NEWSLETTER Week Ending 29.10.77
Page 6
Senator Knight was also asked about the charge that the Grenada
Government is harbouring a criminal wanted by the United States
Government. The Senator said he Knew the person to whom
reference was being made, but he said he had seen no court
authority to prove that this person is wanted in the USA.
Senator Knight said also that this matter is being handled at
diplomatic level.
In this interview on TTT, Senator Knight was also asked about
reports of violations of human rights in Grenada. Senator
Knight said there were no violations of human rights in Grenada
and he said the records of the Grenada Courts prove this.
The Senator said that people made allegations about violations
of human rights but, when it came to providing proof, they were
unable to do so. In this connection he referred to the recent
Election Petition brought by a member of one of the Opposition
Parties against a member of the political party to which Senator
Knight belongs. The Senator said that the charge of bribery
had been made initially but that no evidence had been brought to
prove this and the Election Petition had failed.
Senator Knight said there is not always a full appreciation of
what is meant by 'human rights', and he felt that this term
included the provision of shelter, clothing and hospital
facilities.
In an exclusive interviewed with Mr Maurice Bishop today (27th),
the Leader of the Opposition expressed surprise to NEWSLETTER
that Senator Knight had denied that officers of the Grenada
Defence Force have been sent to Chile for training. "Even
the Commander of the Defence Force has not denied this", he
said, "and if Senator Knight does not know this fact, then he
must be the only person who is ignorant of it."
On the subject of cargo received by the Chilean Army plane,
Mr Bishop said photographs had been taken of the shipment and
the cases had been clearly marked 'Medical Supplies'. Checks
with the Medical Authorities disclosed that no supplies had been
? ived, said Mr Bishop, and he thought it strange that, while
Alister Hughes
THE GRENADA NEWSLETTER Week .bding 29,10.77
Page 7
nobody in authority could say what was in the boxes, Senator
Knight was implying that it was life-jackets.
The Leader of the Opposition said the person being harbouredd
by the Grenada Government" is Eugene Zeek who now goes under
the name of John Clancy. "zeek is wanted by the FBI under a
charge of Fraud by Wire", said Mr BAisop, "and Senator Khight
has seen the 'wantI notice' which the FBI put out on March
5th 1974. Senator Knight also knows that Attorney General
Desmond Christian lost his job because he wished to take action
against Zeek, and it would be interesting to know how Senator
Knight would explain away the unconstitutional interference
of his Government with the duties of Grenada's highest ranking
Law Officer."
Concerning the violations of human rights in Grenada, Mr Bishop
said these had been too well documented by the Duffua Commissioners
and otherwise to need further comment. He described Senator
Knight's reference to the Election Petition as "a red herring."
"After all", said Mr Bishop, "there was nothing in the pleadings
of the Election Petition which had anything to do with human rights."
Reverting to the matter of Grenada's connections with Chile,
Mr Bishop said it was significant that Senator Knight had not
denied that the Grenada Government is forging links with the
Chilean Government of General Pinochet. "In the face of
the world-wide condemnation of the oppressively criminal
nature of the Pinochet Government", said the Leader of the
Opposition, "what cdnoerne right thinking people is that the
close, concrete and military links of the Gairy Government with
the Pinochet regime must hold sinister consequences for Grenada
and the Caribbean."
(807 words)
EXCHaNGQ CONTROL ALOWLNCEu INORbAED
According to a notice dated October 17th and appearing in the
Covarnment Gazette of October 21st, allowances made for currency
for travel and for gifts have been increased with immediate
Sct. (continued)
Alister Hughes
THE GRENADA NEWSLE'i'TR Week Ending 29.10.77
For business travel, the allowanoe,which used to be EO44,000.00
per person per year,has been increased to EC#7,00OO00 or the .0
foreign exchange equivalent. The allowance for vacation
travel is up from EC$1,500.00 to EC03,000.00 or the foreign
exchange equivalent per person per year, and the allowance for
gifts has been doubled from EC*250.00 to EC$500.00 or the
foreign exchange equivalent per person per year.
(105 word)
GRENADA RJPRESE TED AT GEX E lSING
The Grenada Employers Federation (GEF) was represented by its
President, Mrs Angela Smith, at the 12th Interim Meeting of the
Caribbean Enployers Confederation (CEC) which took place in
Barbados from 19th to 21st October.
Presenting a Report to the meeting on behalf of her organisation,
Mrs Smith gave details of the industrial dispute between the
Grenada Shipping Agents and the seamen & Waterfront Workers
Union over the award of a cost of living allowance.
dMs Smith also advised the meeting that there is now draft
legislation for the establishment of a Port Authority in Grenada.
The GEF expressed the view that a Port Authority will have an
effect on industrial relationships on the docks, but she said
the details of the legislation will have to be published before
the extent of the effect can be known.
In an exclusive interview with NWTdLETTER today (27th), Mrs Smith
said she had also reported to CEC that, in Grenada, there was an
increase in activity in the industrial field.
Referring to discussions at the CEC meeting, Mrs Smith said
members had deplored the lack of some Governments' interest
in consulting employer organizations and trade unions when
considering labour legislation. She said the opinion was
expressed that this action runs contrary to the International
Labour Organisation's (ILO) philosophy of tripartism which the
CEC supports.
"The Grenada Government has now accepted in principle that
(continued)
Alister Hughes
THE GRENADA NEWSLETTER Week Ending 29.10*77
Page 0.
HENDON POLICEMAN VISITS
A policeman fom the Hendon Police Training School in England
is now in Grenada on a travel bursary given by the Commission
for Racial Equality. This policeman is Inspector Geoff b
Badworth and the travel bursary is one given to people who deal
with immigrants and who could benefit by having a personal
understanding of the Westindian way of life.
Inspector Budworth, who commands a section of the Bandom Training
School, spent 17 days in Trinidad and arrived in Grenada on
Monday (24th). He leaves for St.Vincent on Tuesday (let).
(88 words)
#vfvvHt t#I*nmrffif5-iri#^
LATE NEWS '.~:
There has been no solution today to the dispute between the
Commercial & Industrial Workers Union (CIWU) and Messrs Jonas
Crown & Hubbard Ltd over the reinstatement of workers dismissed
during the go-slow.
Interviewed by telephone this morning, Mr fred Toppin, Managing
Director of the Company, said there had been no agreement on the
question of reinstatement of workers. "The Company's position
is that this is a separate issue", he said, "and the main issue
must be settled first."
Mr Toppin said that, if the Company were to enter into discussion
with the union now on the question of reinstatement of workers
and while the go-slow is still in effect, the Company would be
negotiating under duress. The Managing Director said the
Company's position had been set out in a letter to the union
today (28th), and a copy of that letter has been sent to the
Labour Commissioner.
At a General Meeting of the Seamen & Waterfront Workers Union
(SWWU), the position taken by Jonas Browne & Hubbard Ltd was
condemned, and a letter calling for the immediate reinstatement
of the dismissed workers has been sent to the Company. A
spokesman for SWWU told NMSLITTER that the Executive of SWWU
will meet this afternoon to discuss the matter further.
(continued)
Alister Hughes
THE GRE&ADA NEWiSETTER Week Ending 29.10.77
Pmae 9
Grenada should become a member of ILO", said Mrs Smith, "and we
look forward to this as it will ensure support for tripartiam in-
Grenada."
(255 words)
DhATH PENALTY SEMINAR
Grenada will be represented at a eOaibbean Semina on the
Abolition of thb Death Penalty, which seminar will be held in
Trinidad ont, and .8th November.
This seminar is sponsored by the Caribbean Human Rights and Legal
Aid Company in conjunction with Amnesty International and the
National Committee for the Abolition of the Death Penalty, and
delegates will meet at Pax Guest nouse at Mount St.Benedict in
Trinidad. This seminar will report to a Global Conference on
%he same subject to be held in Stockholm on December 10th and 11th
1977.
Delegates representing Grenada at the seminar in Trinidad will
be Messrs Lloyd Noel and Alister Hughes.
(108 words)
ARARMAEF BLUHDELL DIE8
According to a release from the University Centre of the
University of the West Indies (UWI) in St.Georges, Miss Margaret
Spencer Blundell BA (Oxon) MA (Cantab), former Extra Mural Tutor
of the UWI for Grenada, died suddenly in the United Kingdom. It
is reported that the death took place on October 5th and the body
was cremated on October 15th.
Miss Blundell was UWI Resident Tutor in Grenada from 1960 to 1970
and the UWI release says it was through her "instrumentality that
T A Marryshow's home, 'The Rosary', was purchased by the
University of the West Indies to be maintained as a memorial to
this great statesman, and the headquarters of the UWI in Grenada."
Miss Blundell was active in the field of the Fine and Creative
Arts, and one of her 'discoveries' was Mr Fitzroy Harraoh who,
through her efforts, obtained financial support to study at the
Jamaica School of Art. Mr Harrack is now a tutor at that
institution. (continued)
Alister Hughes
THE GRMIADA NEWSLETTER Week lading 29.10.77
Pane 11
Mr Eric Pierre, president of CIWU, told NIbILaTER this afternoon
that two more workers were dismissed by the Company today.
"ThI rdkes a total of five dismissed workers since the go-
slow lt*i "s he said, "four from the Food Fair Supermarket and one
flxm the Motor Department, and my union is firm that, unless these
workers* Mf pfthtated, the industrial action against the Company
will contateto, .,
(266 words)
ANTI-CHILE SIGNS
In the suburbs and outskirts of St.Georges a great number of anti-
Chile signs appeared this morning (28th) painted up on walls and
on the surface of streets.
It would appear that these signs were painted up during last night
and some of them, seen by NEWSLETTER, need "Break With Chile",
"Grenada Yes! Chile No i", "Chileans Get out I" and "Butchers
from Chile Get Out Now !"
During this morning, a team of plain clothes policemen was
employed painting out these signs. Photographing these signs
and the painting out operation, NEBSLETTER was suabect to abuse
and threats from these policemen; there was no violence.
(102 words)
CRUISE LINER CALLS
During the week ending October 22nd, only one cruise liner called
at Grenada. This was the "Cunard Countess" which berthed on
Tuesday 18th with 681 passengers.
(26 words)
BANANH SHIPMEhTS
The S S "Geeststar" sailed on October 25th with 20,332 boxes of
bananas weighing 601,137 Ibs. The price paid by Geest
Industries to the Grenada Banana Cooperative Society (GBCS) is not
yet available but the price paid to growers by GBCS was 15 EC%
per pound. This was on the weight of bananas received at the
boxing plants but this figure is not yet available.
The weight shipped by "Geesttide" on October 18 was 623,803 Ibs,
continuedd)
Alister Hughes
THE GRENADA NEVWaLETTER eel Ending 29.10.77
page 12
and the weight at the boxing plants was 643,623 Iba, making
a difference of 19,820 Ibs between the boxing plant weight
and the shipped weight.
Full Text | http://ufdc.ufl.edu/AA00000053/00141 | CC-MAIN-2015-48 | refinedweb | 3,795 | 57.3 |
Problem Statement
Given a sorted array arr[] consisting of N distinct integers and an integer K, the task is to find the index of K, if it is present in the array arr[]. Otherwise, find the index where K must be inserted to keep the array sorted.
Example
- Input: A[]={1,2,3,4,5} k=4
Output:3 //4 is at 3rd position
- Input: A[]={1,2,4,5} k=3
Output:2 //2 is the index position where k must be inserted to keep the array sorted
Approach
Before we discuss the approach for this question let’s see what exactly the question requires us to do. It seems that we have to find the index position of the sorted array where k is present. In case, if k is not present in the array then we need to find the position where k must be inserted to keep the array sorted.
The basic approach can be to first traverse through the whole array if k is found at any index position return index. Otherwise, if the array element is greater than k then insert k at that position to keep the array sorted and return index.
Pseudocode
function search(array,k) for i in 0 to array.length if array element is equal to k return i if array element is greater than k return i
- Make a function named search and, input an array and k.
- Traverse through the array and check whether the element is equal to k or not.
- If the element is equal to k then return index position of the element.
- And if the element is greater than k then also return the index position of the element to insert the value of k at that position for keeping the array sorted.
Program
package codesdope; import java.util.*; public class codesdope { public static int search(int arr[],int x) { for(int i=0;i<arr.length;i++) { if(arr[i]==x) return i; if(arr[i]>x) return i; } return -1; } public static void main(String[] args) { int arr[]= {1,2,3,4,5,6,8,9}; System.out.println(search(arr,2)); System.out.println(search(arr,7)); } }
1
6
Let us dry run the above code-
arr[]={1,2,3,4,5,6,8,9}
- When K=2
i=0 arr[0]!=2
i=0 arr[0] is not greater than 2
i=1 arr[1]=2
return 1
OUTPUT=1
- When k=7
i=0 arr[0]!=7
i=0 arr[0] is not greater than 7
i=1 arr[1]!=7
i=1 arr[1] is not greater than 7
i=2 arr[2]!=7
i=2 arr[2] is not greater than 7
i=3 arr[3]!=7
i=3 arr[3] is not greater than 7
i=4 arr[4]!=7
i=4 arr[4] is not greater than 7
i=5 arr[5]!=7
i=5 arr[5] is not greater than 7
i=6 arr[6]!=7
i=6 arr[6] is greater than 7=return 6
Output=6
Time Complexity
Since we will be traversing the whole array therefore the time complexity of algorithm would be O(N). | https://www.codesdope.com/blog/article/search-insert-position-of-k-in-a-sorted-array/ | CC-MAIN-2021-49 | refinedweb | 535 | 70.33 |
Haskell For Kids: My Project!
Oops! I almost forgot to do my own homework from my Haskell for Kids class!
Here’s the program I wrote (intentionally using only the features we’d already talked about in class.)
import Graphics.Gloss picture = pictures [ color lightGray (rectangleSolid 500 500), translate 0 (-75) (color darkGray mouse), translate 90 (150) thoughtBubble ] darkGray = light black lightGray = dark white thoughtBubble = pictures [ (scale 100 50 filledCircle), translate ( -85) ( -15) (scale 0.4 0.4 (text "I wonder...")), translate ( -85) ( -65) (scale 20 20 filledCircle), translate (-120) (-110) (scale 15 15 filledCircle) ] filledCircle = pictures [ color white (circleSolid 1), color black (circle 1) ] mouse = pictures [ scale 2 1 (circleSolid 50), translate (-100) ( 40) mouseHead, translate ( 120) (-20) mouseTail, translate ( -40) (-55) mouseLegs, translate ( 20) (-55) mouseLegs ] mouseHead = pictures [ translate 20 35 (color rose (circleSolid 20)), translate 20 35 (thickCircle 5 40), rotate (-10) (scale 1.5 1 (circleSolid 40)) ] mouseTail = (rotate (-15) (rectangleSolid 100 5)) mouseLegs = pictures [ rectangleSolid 5 35, translate 20 0 (rotate 10 (rectangleSolid 5 35)) ]
And the result:. | https://cdsmith.wordpress.com/2011/08/23/haskell-for-kids-my-project/?like=1&source=post_flair&_wpnonce=2221fbe573 | CC-MAIN-2015-32 | refinedweb | 173 | 55.68 |
Opened 11 years ago
Last modified 10 years ago
#10059 new defect
prop.rendered not properly set in browser.html
Description
Hi,
despite having the wiki_properties set to a property that is rendered correctly as wiki syntax, the prop.rendered in browser.html is not set. This causes everything to be displayed not in the py:when case of a span and a div.
Attachments (0)
Change History (5)
comment:1 by , 11 years ago
comment:2 by , 11 years ago
Mmh… I'm afraid I don't understand what this is about. Would you mind explaining in a bit more detail, and giving a bit more context? Thanks.
comment:3 by , 11 years ago
Well, it seems to work for me:
- what is the name of the property you're using?
- what is the value of your
[browser] wiki_properties?
Also, are you sure you're using the correct browser.html template? (TracUpgrade#CustomizedTemplates)
comment:4 by , 11 years ago
The relevant part from browser.html (I have freshly installed a 0.12.2) is:
the property is naemd trac:description and is contained in the wiki_properties setting (and it also is correctly rendered as wiki syntax, however not in the span+div where it should be)
<tr py: <td colspan="2"> <ul class="props"> <py:def <py:choose> <py:when<em><code>$prop.value</code></em></py:when> <py:otherwise>$prop.value</py:otherwise> </py:choose> </py:def> <li py: <py:when <span py: <div py: </py:when> <py:otherwise> <i18n:msgProperty <strong>$prop.name</strong> set to ${prop_value(prop)}</i18n:msg> </py:otherwise> </li> </ul> </td> </tr>
If I understand this correct, the li item is output as a span + div when prop.rendered is populated. However it is not populated here, causing the html output to be:
<li> Property <strong>trac:description</strong> set to <p> Test description property should be rendered in wiki syntax </p> <ul><li>top 1 <strong> tops2 </strong></li></ul> </li><li>
for me. I think is is caused by browser.py having the function
def render_property(self, name, mode, context, props): if name in self.wiki_properties: return format_to_html(self.env, context, props[name]) else: return format_to_oneliner(self.env, context, props[name])
I think this function should return a RenderedProperty because only in that case the render_property method of class BrowserModule sets prop.rendered :
rendered = renderer.render_property(name, mode, context, props) if not rendered: return rendered if isinstance(rendered, RenderedProperty): value = rendered.content else: value = rendered rendered = None
So I think this function should be like:
def render_property(self, name, mode, context, props): if name in self.wiki_properties: value = format_to_html(self.env, context, props[name]) return RenderedProperty(name=name,content=value) else: return format_to_oneliner(self.env, context, props[name])
or probably even return a RenderedProperty in the else case. Otherwise it is not possible to distinguish between normal properties, and properties that have been rendered as wiki markup in the browser.html template.
Here is the html that is returned after doing this change on my side:
<li> <span>trac:description</span> <div><p> Test description property should be rendered in wiki syntax </p> <ul><li>top 1 <strong> tops2 </strong></li></ul></div>
As you can see, this now does the span+div for the property that the browser.html is doing when the property is rendered…
comment:5 by , 10 years ago
Ok, see what you mean now. Thanks for the details!
I am not quite sure, but it seems that in WikiPropertyRendered when I set in the function render_property this code in the if case:
value = format_to_html(self.env, context, props[name]) return RenderedProperty(name=name,content=value)
then everything is fine… maybe there are also other places where it has been set in a wrong way? maybe also the oneliner case should return that? | https://trac.edgewall.org/ticket/10059 | CC-MAIN-2022-05 | refinedweb | 636 | 56.55 |
As in the other tutorials, you should read this while running an interactive Lisp session. The forms should be evaluated one after another from the top of the tutorial to the bottom.
Freetext indices provide a way to quickly retrieve triples containing a given word, allowing queries similar to those understood by Web search engines.
Setting Up
We start by creating a triple store with a text index:
> (create-triple-store "fti") > (create-freetext-index "index1")
And add a few triples:
> (enable-!-reader) > (register-namespace "ex" "") > (add-triple !ex:Nietzsche !ex:quote !"We have art in order not to die of the truth.") > (add-triple !ex:Picasso !ex:quote !"Art is the lie that enables us to realize the truth.") > (add-triple !ex:Wilde !ex:quote !"All art is quite useless.")
Basic Queries
The words in the subjects of each of these triples are indexed. We can now use freetext-get-triples to query them. This function returns a cursor, from which we just collect the subjects:
> (collect-cursor (freetext-get-triples "art") :transform 'subject) ({Nietzsche} {Picasso} {Wilde})
Here, the query is simply
"art". The index is case-insensitive, so the triple where the word 'art' is capitalized is also found, and a query of
"ART" would have returned the same results.
More complicated queries can be made by combining queries with boolean operators:
> (collect-cursor (freetext-get-triples '(and "art" "truth")) :transform 'subject) ({Nietzsche} {Picasso}) > (collect-cursor (freetext-get-triples '(or "useless" "die")) :transform 'subject) ({Nietzsche} {Wilde})
To match an exact phrase, a query can use the
phrase operator, as in:
> (collect-cursor (freetext-get-triples '(phrase "art is")) :transform 'subject) ({Picasso} {Wilde})
Note that
and and
or allow any kind of query to appear as their argument, so
(and (phrase "art is") "truth") is a valid query.
Wild-card and Fuzzy Matching
Freetext indices support wild-card matching with the
match operator:
> (collect-cursor (freetext-get-triples '(match "reali?e")) :transform 'subject) ({Picasso})
The string given to
match can contain question marks to match single characters, or asterisks to match any number of characters. Be aware that when an asterisk appears early in the string, as in
"*tastic", it is impossible for AllegroGraph to make effective use of the text index, and the query will be slower.
Another form of non-exact matching can be done with the
fuzzy operator. This one matches all words within a given Levenshtein distance (or edit distance) of the given term. The Levenshtein distance between two strings is the amount of edits that have to be made to change the one into the other, where an edit is the insertion or deletion of a single character, or the replacement of a character with another one.
(fuzzy "realise" 1) will match
realise,
realize,
realist, and so on. If the distance argument is omitted, one fifth of the word length (rounded up) is used.
Indexing By Predicate
The create-freetext-index function takes a lot more arguments than just a name. It is, for example, possible to only index triples that have certain predicates:
> (create-freetext-index "name-index" :predicates (list !ex:name)) > (list-freetext-indices) ("name-index" "index1")
This new index will not index the triples with the
!ex:quote predicate. But it will index these:
> (add-triple !ex:Nietzsche !ex:name !"Friedrich Wilhelm Nietzsche") > (add-triple !ex:Picasso !ex:name !"Pablo Diego José Francisco de Paula Juan Nepomuceno María de los Remedios Cipriano de la Santísima Trinidad Ruiz y Picasso")
freetext-get-triples takes an
:index keyword argument that is used to only use a single index. If not given, all existing indices in the store are used.
> (count-cursor (freetext-get-triples "art")) 3 > (count-cursor (freetext-get-triples "art" :index "name-index")) 0 > (count-cursor (freetext-get-triples "Pablo" :index "name-index")) 1
Ignored Words
Several of our quotes contain the word
is, yet when we search for that, we get nothing:
> (collect-cursor (freetext-get-triples "is") :transform 'subject) ()
To prevent the index from containing useless information, by default all words shorter than three characters are ignored. On top of that, there is a stop-word list, the content of which is also ignored. By default this list contains a number of common English words. If we are interested in short and common words, we can create an index that indexes them:
> (create-freetext-index "index2" :min-word-size 2 :stop-words ()) > (collect-cursor (freetext-get-triples "is") :transform 'subject) ({Picasso} {Wilde})
Indexing Subject, Predicate, and Graph Fields
create-freetext-index also allows control over which part of a triple are indexed. The
:index-fields argument defaults to
(:object), meaning only the object is indexed, but may contain any of
:subject,
:predicate,
:object, and
:graph. Our subjects and predicates contain resources, not literals, though, and by default those are not indexed. We can set
:index-resources to
T when creating an index to fix this. Another possible value is
:short, which will cause only the part of the resource after the last
/ or
# to be indexed.
> (create-freetext-index "sp-index" :index-fields '(:subject :predicate) :index-resources :short) > (collect-triples (freetext-get-triples "Nietzsche" :index "sp-index")) (<Nietzsche quote We have art in order not to die of the truth.> <Nietzsche name Friedrich Wilhelm Nietzsche>)
A similar setting exists to control the indexing of literals,
:index-literals. This defaults to
T, causing all literals to be indexed, but can be set to
nil to not index literals, or to a list of resources to index only literals with the given types.
> (create-freetext-index "typed-literals" :predicates (list !ex:test) :index-literals (list !ex:indexme)) > (add-triple !ex:A !ex:test !"hello") > (add-triple !ex:B !ex:test !"hello"^^ex:indexme) > (collect-cursor (freetext-get-triples "hello" :index "typed-literals") :transform 'subject) ({B})
Indices can be deleted at any time with drop-freetext-index:
> (drop-freetext-index "typed-literals") > (list-freetext-indices) ("sp-index" "name-index" "index1")
Searching from SPARQL
The AllegroGraph SPARQL engine defines a 'magic' predicate
<>, which can be used to generate bindings for the subjects of triples that match a given freetext query.
> (sparql:run-sparql "PREFIX fti: <> SELECT ?x WHERE { ?x fti:match 'remedios' }" :results-format :lists) (({Picasso})) :select (?x)
This matches only
ex:Picasso because the only triple in which
"remedios" occurs is Picasso's name.
Textual Query Syntax
To express more complicated queries, the
fti:match predicate understands a simple language, where multiple words mean
and, and a pipe character means
or.
> (sparql:run-sparql "PREFIX fti: <> SELECT ?x WHERE { ?x fti:match '(art | truth) useless' }" :results-format :lists) (({Wilde})) :select (?x)
Furthermore, double-quotes around a piece of text can be used to express the
phrase operator, which matches only triples that contain the whole phrase. Wild-card matching can be done simply by including question marks and asterisks in words, and fuzzy matching is done by appending a tilde (
~) to a word, optionally followed by a maximum edit distance.
The function
ag.text-index:parse-query transforms such a textual query into an S-expression.
> (ag.text-index:parse-query "allegro* \"freetext index\" (fuzzy | levenshtein~2)") (:and (:match "allegro*") (:phrase "freetext index") (:or "fuzzy " (:fuzzy "levenshtein" 2))) | http://franz.com/agraph/support/documentation/current/freetext-lisp-tutorial.html | CC-MAIN-2014-52 | refinedweb | 1,196 | 51.28 |
Hi,
I am a complete newbie and these are the first lines of code that I write, so please be patient :(
I need to create a C# exe file that calls a c++ function
What I wrote always throws an error at runtime:
Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at DDecrypt.DefaultDecrypt(String s) at DDecrypt.Main()
The function it calls is in a C++ dll ()closed source code); exploring the exposed functions I found:
1. void CLSCrypt::default constructor closure(void) 2. CLSCrypt::CLSCrypt(bool,class _bstr_t const &) 3. class _bstr_t CLSCrypt::DecryptString(class _bstr_t const &,enum STRCODING,bool) [B]4. class _bstr_t CLSCrypt::DefaultDecrypt(class _bstr_t const &)[/B] 5. class _bstr_t CLSCrypt::DefaultEncrypt(class _bstr_t const &) 6. void CLSCrypt::Delete(void) 7. class _bstr_t CLSCrypt::EncryptString(class _bstr_t const &,enum STRCODING) 8. class _bstr_t CLSCrypt::GetBase64(unsigned char *,unsigned long) 9. bool CLSCrypt::GetFromBase64(class _bstr_t const &,unsigned char * &,unsigned long &) 10. bool CLSCrypt::GetFromHEX(class _bstr_t const &,unsigned char * &,unsigned long &) 11. class _bstr_t CLSCrypt::GetHEX(unsigned char *,unsigned long) 12. class _bstr_t CLSCrypt::GetMD5String(unsigned char *,unsigned long,enum STRCODING) 13. void CLSCrypt::SetAccess(class _bstr_t const &,unsigned long) 14. CLSCrypt::~CLSCrypt(void)
the function I want to call is
class _bstr_t CLSCrypt::DefaultDecrypt(class _bstr_t const &)
and it's mangled code is
?DefaultDecrypt@CLSCrypt@@SA?AV_bstr_t@@ABV2@@Z
to test the import of the function I wrote:
using System; using System.Runtime.InteropServices; public class DDecrypt { [DllImport("LSUtils.dll", EntryPoint = "?DefaultDecrypt@CLSCrypt@@SA?AV_bstr_t@@ABV2@@Z", ExactSpelling = true)] public static extern String DefaultDecrypt(String s); public static void Main() { String s = "this is a test!"; String y = DefaultDecrypt(s); Console.WriteLine(y); } }
when I run it the error is the one above;
I suppose it is due to the data type that I want to pass (a String while the function wants something else)
I tried to import the function as
public static extern String DefaultDecrypt([MarshalAs(UnmanagedType.BStr)] String s);
or even to mark the code as 'unsafe' and use as input
IntPtr ptr = Marshal.PtrToStringBSTR(s);
but the error is always the same;
Could somebody help me with this?
Thanks | https://www.daniweb.com/programming/software-development/threads/267596/how-to-pass-parameter-bstr-t-cons-to-invoke-a-dll-written-in-c | CC-MAIN-2017-30 | refinedweb | 372 | 50.94 |
This is very important interview question. Lets get one by one.
FinalFinal is a non access modifier that can be used with a variables, methods and also classes.
Final variables
- Final variables cannot be changed, because they are constants.
- Look at the following code, it won't compile.
public class Fruits { public static void main(String[] args) { final int MAX_DIS = 5; MAX_DIS = 10; System.out.println(MAX_DIS); } }
Final methods
- Final methods cannot be overridden.
- Following code won't compile.
public class Fruits { final void eat() { } } class Apple extends Fruits { void eat() { } }
Final classes
- Final classes cannot be extended.
- Following code won't compile.
public final class Fruits { } class Apple extends Fruits { }
Finally
- This is always associated with try-catch block.
- It is normally used to handle any clean up codes like closing a file, closing a database to prevent resource leak.
- This finally block runs every time either exception happen or not.
- Look at the following example.
public class FinallyBlockDemo { public static void main(String args[]) { double num1=0, num2=0; Scanner scn = new Scanner(System.in); try{ System.out.print("Enter number one: "); num1 = scn.nextInt(); System.out.print("Enter number two: "); num2 = scn.nextInt(); double div = num1 + num2; System.out.println("Answer is : " + div); }catch(InputMismatchException e){ System.out.println("Please enter number only."); }finally{ System.out.println("Service stopped."); } } }
Finalize()
- This method is presented in Object class (Java.lang.Object.finalize())
- This is called by the Garbage collector just before destroying any object.
- If it determines any object without a reference, it calls this method anytime.
- This is used to perform clean up activities related to any object without reference.
- This cleanup activities means, if the object is associated with any database connection, finalize() method is used to disconnect that connection.
Final Vs Finally Vs Finalize()
Reviewed by Ravi Yasas
on
11:41 AM
Rating:
This is the right webpagye for anybody who hopes to find out about this topic.
Yoou understand so much its almost tough to argue with youu (not that I actually would want to…HaHa).
Yoou certainly put a new spin oon a subjet which has been written aboput for a long time.
Wonderful stuff, jusst wonderful! | http://www.javafoundation.xyz/2016/04/final-vs-finally-vs-finalize.html | CC-MAIN-2021-43 | refinedweb | 364 | 51.95 |
PRObooks
ASP.NET Programmer s Reference
ASP.NET Programmer s Reference is an enormous book. It offers readers nearly 900 pages of information on ASP.NET. There is lots of technical information, but very little in the way of style or imagination in this book. As you might expect, a lot of the material appears to be borrowed from Microsoft s .NET Framework Reference. So you get lots of summary descriptions of object properties and methods.
The book should be subtitled, Using VB .NET. Wrox has separate VB .NET and C# titles for their Beginning ASP.NET series. If you are a C# programmer or a Java programmer interested in JScript .NET, put away your credit card this book has nothing to offer you.
However, I like the division of the book into four sections. The first is a forced march through the ASP.NET namespaces. That s where you get the details on ASP.NET objects. The second section covers important server topics, like caching, configuration, and security. The third is a close look at building Web Services. The final section covers data access with ADO.NET and XML.
The book is more than a little schizophrenic. It seems the editors at Wrox could not decide whether they wanted an introduction to ASP.NET or an ASP.NET namespace reference. As a consequence, the book is mediocre at both. It spends far too much time walking the reader through code examples, telling them how the code works. That s the place for books like Beginning ASP.NET, Using VB .NET, and Professional ASP.NET. A trimmed down book, which provides the necessary details on the classes in the ASP.NET and useful .NET namespaces, would be better received. I m talking about the sort of book that programmers dog-ear. Wrox has published them before, on topics like dHTML.
The other major blemish is common to recent Wrox titles: blatantly bad editing. There are more than the conventional number of typos, grammatical errors, and poor prose in this book. You ll also run into occasional notes about class details being omitted. That s what happens when you rush books into print based on beta releases.
There is a lot of information in ASP.NET Programmer s Reference. If you want a reference with details about the ASP.NET namespaces, this is a good addition to your library. If you want to learn ASP.NET programming skills, look at Wrox s other titles, Beginning ASP.NET, Using VB .NET, and Professional ASP.NET.
Glenn E. Mitchell II, Ph.D.
ASP.NET Programmer s Reference by Jason Bell, et al., Wrox Publishing,.
Rating:
ISBN: 186100530X
Cover Price: US$39.99
(950 pages) | https://www.itprotoday.com/net/aspnet-programmer-s-reference | CC-MAIN-2019-04 | refinedweb | 451 | 71 |
Sending and receiving information to the UDP Client in Java
will provide send and receive information by
the UDP client in Java. UDP client... Sending and receiving information to the UDP Client in Java... to be send into the UDP
server and it also sends a message to UDP client in the text
UDP Client in Java
UDP Client in Java
... or
messages for UDP server by the UDP client. For this process you must require..., then you
send messages to UDP server. The sending process has been defined just
Receiving and sending a request to UDP server in Java
will know how to receive and send messages by
UDP server. First of all, UDP server receives messages and sends some
information to UDP client. The brief... Receiving and sending a request to UDP server in Java
UDP - User Datagram Protocol
;Here, you will provide send and receive information by the UDP client... a
request to UDP server in Java
Here, you will know how to receive and send...;
Multicast Client in Java
This section describes how to send and receive
UDP Server in Java
be send or
receive.
getPort():
This method returns the port number of the host... UDP Server in Java
... of UDP server. This section provides you the brief description
and program
Java UDP
be
used to send datagrams, using Datagram Sockets, to one another i.e. from client... with a short response.
Read more at:
http:/...
Java UDP
Multicast under UDP(client server application)
Multicast under UDP(client server application) UDP is used to support mulicast. Recall that UDP is connectionless and non reliable.
Hence... in the
following) and RMulticastClient (called client in the following) in Java
Multicast Client in Java
UDP Multicast Client in Java
... to send and receive the IP
packet or message by multicast client. Here, we provide...') and port number(5000). Those of any client sends
and receives IP packet
J2ME -- Stream video from a udp server - MobileApplications
address of the server and the port no. to connect to and using the UDP protocol..For example from the following url :
udp://222.222.222.121:2211...J2ME -- Stream video from a udp server HI,
I wanted to develope
Image transfer using UDP - Java Beginners
getting is that I can transfer only text files properly.The file transfer is using UDP. I have used core java technologies like JFC,JDBC,UDP.
My main...Image transfer using UDP Hello
I am new to Java.Currently I am
A programm for error free transmission of data using UDP in java
A programm for error free transmission of data using UDP in java Hi... and receiver).
The program will use UDP for transmission.
Error-free transmission... engineering college. I've got a project in java programming that should be submitted
Open Source Chat
Open Source Chat
Open Source Chat Program
FriendlyTalk is a simple chat program offering the standard features of a chat client. FriendlyTalk allows... by Microsoft .NET (v 1.1.4322). Unlike traditional chat systems, which use UDP or TCP
Construct a DatagramSocket on an unspecified port
Construct a DatagramSocket on an unspecified port... provide a
complete example based on the method for creating the DatagramSocket... and
initialize a object Client of DatagramSocket. Then we call DatagramSocket()
method
Getting the Local port
;}
}
The output of the above example is as under in which
the program display all the local port of the server.
Here is the Output of the Example :
C... on port 1199.
Download of this example.
send data thru parallel port - Java Beginners
send data thru parallel port Hi,
I want to print send data to printer thru parallel port.How can it be possible thru java code
Low port Scanner
classes are used to
represent the connection between a client program...;
Output of this program:
C:\rose>java LowPortScanner... Low port Scanner
UDP (User Datagram Protocol)
) that is used in client/ server programs like videoconference systems expect
UDP...
UDP (User Datagram Protocol)
The User Datagram Protocol (UDP) is a transport protocol
Overview of Networking through JAVA,Getting the Local port
;}
}
The output of the above example is as under in which
the program display all the local port of the server.
Here is the Output of the Example....
There is a server on port 1199.
Download of this example.
message sending and receiving using UDP TCP in J2ME
message sending and receiving using UDP TCP in J2ME I need the simple program for message sending and receiving using UDP TCP in J2ME. Could u pls
Local Port Scanner
Local Port Scanner
This is a simple program of java network. Here,
we are going to define.... Socket
classes are used to establish a connection between client program
Listening port 8080
Listening port 8080 I need a java code through which i can listen the http port 8080 ... i want to interpret the incoming http request on 8080 port. Please help
Setting source port on a Java Socket?
Setting source port on a Java Socket? Is it possible to explicitly set the the source port on a Java Socket
MySQL default port number in Linux
to the mysql using this then use the following
sql command to find out the port number...MySQL default port number in Linux - How to find the default port number of
MySQL in Linux?
If you are using MySQL database server installed on the Linux
Multicast Server in Java
or messages from multicast
client, which has the port number '5000' and IP(Internate... port number at the specified host.
Here is the code of program...;buffer.length, client, client_port);
socket.send(packet);
program for chat - Java Beginners
program for chat please provide code for fallowing
1)Write a java program Multi-user chat server and client
listening http port 8080 - Java Server Faces Questions
listening http port 8080 Hi Sir
I just need java code through which i can listen the http port 8080 ... i want to interpret the http request before it reaches the server and i also want to know how to send it back
How to send sms alerts to mobile using java?
How to send sms alerts to mobile using java? Hi i have used the Code to send message on mobile using java code, it is not working -
COM2...(CommPortIdentifier.java:105)
confusion is there what is COM2 port and how
Java Read username, password and port from properties file
Java Read username, password and port from properties file
In this section, you will come to know how to read username, password and port no from... to a stream or loaded from a stream using the properties file
MySQL default port number and how to change it
database server by providing the port number in your
program.
You can also...What is default port number of MySQL Database server?
This tutorial will explain you about the default port number used by the
MySQL Server
Ajax Chat
chat messages.
People from all over the world can now be in the same chat using... the example link from the menu.
The Metatron Chat Engine is a web-based Chat... driven web chat program. This will be a very simple program, but will be expanded
Chat in Java wih GUI
Chat in Java wih GUI Welcome all >> << how is everybody >< i wanna Chat program in java server & client
thanks
Locking Issues
;
MySQL can manage the contention for table contents by using Locking... themselves which program may access the tables at which time.
Internal Locking... required different lock types.
For deciding this, if you are using storage engine
UDP - User Datagram Protocol
What is Java Client Socket?
What is Java Client Socket? Hi,
What is client socket in Java... socket connections. To get connection in client socket in java used the class... class passing two arguments as hostName and TIME_PORT. This program throws
serial port communication using bluetooth in j2me
serial port communication using bluetooth in j2me how to make serial port communication using bluetooth in j2me ?
what r prerequisites
JSP-Servlets-JDBC
..
It will be helpful if it's made into sub modules,
JSP,
Driver Constants,
Servlets,
Java Beans...JSP-Servlets-JDBC Hi all,
1, Create - i want sample code... should be retrieved in JSP and should be able to update using JSP and update
datagram - JSP-Servlet
datagram program that sends "hello" message to a machine named "yahoo" on port"4000" using datagram Hi Friend,
Please visit the following link:
net beans
net beans Write a JAVA program to validate the credit card numbers using Luhn Check algorithm. You will need to search the Internet to understand how the algorithm works.
Hi Friend,
Try the following code:
import
Axis2 client - Axis2 Client example
\client\src>javac net/roseindia/*.java
Note: net\roseindia... for details.
To run the client type following command:
java net/roseindia/Test... for details.
E:\Axis2Tutorial\Examples\HelloWorld\client\client\src>java net
Which is better .Net or Java?
compile a program and compile it to a .Net executable.
Java supports a Connected...When a developer needs to choose between Java or .NET to develop a new... program that can run on any platform. The only requirement is Java Virtual Machines
Chat Server
(which runs on client side). This application is using for chatting in LAN... are using some core java features like swing, collection, networking,
I/O Streams...
Chat Server
java chat application
java chat application Can u plz send me java code for Lan communicatino(chat
how to execute jsp and servlets with eclipse
how to execute jsp and servlets with eclipse hi
kindly tell me how to execute jsp or servlets with the help of eclipse with some small program... to create a java project in Eclipse and how can you add additional capabilities
multi user chat server - Java Beginners
multi user chat server write a multi chat server and client with step by step explanation?
please send me this source code to my mail id with step by step explanation
Java Client Application example
Java Client Application example Java Client Application
send without authentication mail using java - JavaMail
send without authentication mail using java Dear All,
I am tring to send simple mail without authentication using java.
is it possible, could i send email without authentication.
I have written following code.
public
Live Chat Application using java
Live Chat Application using java I want to develop Live Chat Application for my web site, please help me by posting the code or by giving any suggestion, by which i can develop it....
Thanks in advance
Java Programming: Section 10.4
out.println() to send a line of
data to the client, the server program calls.... UDP is supported in Java, but
for this discussion, I'll stick to the full TCP....)
For two programs to communicate using
TCP/IP, each program must create
proxy server and client using java - Java Beginners
proxy server and client using java how to write program in java for proxy server and client
net beans2
net beans2 Write a JAVA program to find the nearest two points to each other (defined in the 2D-space
net beans
net beans Write a JAVA program to read the values of an NxN matrix and print its inverse
How to send and recieve email by using java mailserver
How to send and recieve email by using java mailserver How to send and receive email by using java mail server?
Hi Friend,
Please visit the following link:
Java Mail
Thanks
send mail using JavaScript
send mail using JavaScript How can we send mail using JavaScript?
Hi friends,
You can not send email directly using JavaScript.
But you can use JavaScript to execute a client side email program send the email using
java chat using java media
java chat using java media Remove the error of this code plz reply me fast on my
mail ID **Deleted by Admin**
package chating;
//import java.applet.Applet;
import java.awt.event.*;
import java.awt.*;
import
Multicasting in Java - java tutorials,tutorial
:
Multicast in Java
UDP Multicast Client in Java
Multicast Server.... If we are not using
Multicast, then we need to send the data packets one for each...Multicasting in Java
In a datagram network, Multicast is the transmission
Java FTP Client Example
Java FTP Client Example How to write Java FTP Client Example code?
Thanks
Hi,
Here is the example code of simple FTP client in Java which downloads image from server FTP Download file example.
Thanks
Construct a DatagramPacket to receive data
Construct a DatagramPacket to receive data
... of a example
based on the method DatagramPacket(buffer, buffer.length...)
method for creating a datagram packet to receive data
RMI Client And RMI Server Implementation
on it.
Program
description:
In this section, you will learn how to send massage from...:\rose>java RmiServer
this address=roseindi/192.168.10.104,port=3232t... RMI Client And RMI Server Implementation
Client Server Java app
Client Server Java app I developed a client server based java networking Instant Messaging app. The client program is needed to be run on the client computer whereas the server program is on server computer. This works in my
port number - JavaMail
port number i want james2.1.3 port number like
microsoft telnet>open local host _______(some port number)
i wnat that port number
about networking program - Java Beginners
about networking program Dear Sir,
i'm programing client side... that those data have to be transmitted to udp server in this case what kinda... during transmiting, and also i have to receive data in hex and ascii format so what
Changing MySQL Port Number
Changing MySQL Port Number How to change the MySQL port... installation directory. Change the port number to desired port number.
By default port no entry will look like:
port=3306
You can change the value of port
Client Socket Information
Client Socket Information
In this section, you will learn how to get client
socket information. This is a very simple program of java network. Here, we have
MySQL Default Port number
MySQL Default Port number HI,
What is MySQL Default Port number?
Thanks
Hello,
MySQL Server Default Port number is 3306.
Thanks
java chat?
java chat? which is the best tool for java web chat ? Is comet a good techonology ? what about its performance with tomacat 6
net beans
net beans Write a JAVA program to parse an array and print the greatest value and the number of occurrences of that value in the array. You can initialize the array random values in the program without the need to read them
Which language should you learn Java or .NET?
If you are confused about which language should you learn Java or .NET, well... libraries to develop program that can run on any platform. To run Java, a computer...
Creating multimedia applications by using authoring tools
About .Net:
.NET
A Message-Driven Bean Example
source files.
To deploy the application and run the client
using... A Message-Driven Bean Example
Introduction
Session beans allow you to send JMS
java client server program for playing video file(stored in folder in the same workspace) using swings
java client server program for playing video file(stored in folder in the same workspace) using swings Hello friends this is RAGHAVENDRA, I am doing a client server program to play a video file, when I run both client and server
net beans
net beans Write a JAVA program to auto-grade exams. For a class of N students, your program should read letter answers (A, B, C, D) for each student. Assume there are 5 questions in the test. Your program should finally print
Send multipart mail using java mail
Send multipart mail using java mail
This Example shows you how to send multipart mail using java
mail. Multipart is like a container that holds one or more body
EchoClientSocket
the hostname port number of a local
machine. This program defines the IOException... in the program. Here, we are
giving a complete example named... and port
number. If the machine supports both arguments then Client socket
Java Mail Assimilator Example
Java Mail Assimilator Example
This Example shows you how to send
a message using javamail api. A client creates new
message by using Message subclass. It sets
Web Programmer
; designing on Java/PHP/.Net. platform.
Thorough knowledge of Linux... of Java/PHP/.Net.
Advanced database concepts.
Proven affinity with large..., Python is also used for the chat platform.
The client technology includes
chat server - JavaMail
chat server i want to develop chat server in java using smtp and pop3 protocol.plz tell me what i have to do.what are the APIs i have to use
Java programming or net beans - Java Beginners
Java programming or net beans Help with programming in Java?
Modify... 4
Write a program that creates an Array List of pets. An item in the list... on how I can create the program on a step by step basis or the solution would be even
Using MYSQL Database with JSP & Servlets.
Using MYSQL Database with JSP & Servlets.
... to connect to the database from
JSP page.
Send you queries... acceres the MYSQL
database. Here I am using MYSQL & tomcat | http://www.roseindia.net/tutorialhelp/comment/81944 | CC-MAIN-2014-35 | refinedweb | 2,865 | 63.8 |
Opened 3 years ago
Last modified 4 months ago
This was necessary for ticket:2867 but is useful enough on it's own to be a separate enhancement ticket.
Added module is django.test.http which contains all the code to bring up a slightly modified version of the regular django test server. The module is simple to use, you simply create a unit test that inherits from django.test.http.HttpTestCase?. That class contains setUp and tearDown methods that bring up the test server and kill it when the test is complete.
The django test server had huge issues with the in memory database, which was the test framework default. I brought this up on the dev list and saw that other had issues using the in memory database as well. I removed ':memory' from being the default database, although if TEST_DATABASE_NAME is set to ":memory:" it will still work. I modified the database cleanup code to account for both cases as well (Note: previously the cleanup code didn't account properly for regular sqlite databases, it now accounts for both).
path to resole this enhancement
Adding patch to subject.
Whoops.
fixed comment typo and improper print statement
Un-assigning from myself since this needs to be reviewed by a commiter.
How does this relate to the unit test system that has evolved since then?
I haven't looked at the code base since I wrote this patch. So my short answer would be that it doesn't relate at all and probably breaks.
If the new unittest system includes support for running a live server and making real requests then this patch is obsolete.
If the new unittest system doesn't have list server support then I should probably rewrite this patch to provide the support within the new framework.
Wow, Mikeal, that was a fast reply. Could you, as time permits, look whether this is still needed and put a little comment in the ticket?
Or perhaps Russell could take a short look, he should be able to spot it without looking ;-) I don't know the unit test so well. I'm only doing triage here.
The testing documentation doesn't seem to have changed since I wrote this patch.
Regardless I think I could do this a little more elegantly now. I also read about support for cherrypy's wsgi when installed which I think would be important to add since that big hack to kill the server isn't necessary using the cherrypy server. Eww, and I'm raising a string instead of a proper exception in there.
I'll try to find some time this week or next to work on this.
runtestserver command for manage.py
With the core-management.patch it would be possible to start a test server like './manage.py runserver' with the command './manage.py runtestserver'. It would be setup a test-db like with './manage test' and loaded fixtures called 'initial_data' and 'test_data'.
testing contrib to flush database during tests
Together with the core-management.patch it would be possible to flush a running live test server between tests.
Confirmed that it worksforme on SVN HEAD of unicode branch.
Just, changes made on django/test/utils.py by http_test_case.2.diff patch has to be reverted, as database syssetm changed meantime.
I'd be very very glad to see this patch being checked in ASAP as I'm using it on all my projects already.
All patches in one, with reverted utils.py
I believe this was fixed in [5912] with the addition of manage.py testserver.
This is a little different to manage.py testserver. The testserver command is about getting a server that uses test data; this ticket is about starting a server during a test case so that a test case can poke AJAX-style methods, or write a view that in turn makes HTTP calls. This is needed if you are going to run Selenium-style tests, as the test case needs to interact with a live running server.
Is there a way to see this one in trunk?
Anything needs to be done, or any way how to speed it up?
The patch attached to this bug won't work in the current trunk.
I can write another one but after seeing what happened to the last patch I'd like to know that someone is actually going to review it before it becomes outdated like the last one.
Sorry, i was trying to apply the wrong patch.
Haven't tried the newest set.
Replying to Mikeal Rogers <mikeal@osafoundation.org>:
The patch attached to this bug won't work in the current trunk.
I can write another one but after seeing what happened to the last patch I'd like to know that someone is actually going to review it before it becomes outdated like the last one.
The patch attached to this bug won't work in the current trunk.
I can write another one but after seeing what happened to the last patch I'd like to know that someone is actually going to review it before it becomes outdated like the last one.
The requirements for this patch are the same as every other patch, as documented here:
In short:
I originally wrote this a long time ago (at least in web years). I've learned a lot since then and have mde my way through a dozen test frameworks and developed a few myself. I took a few hours out of my day today and try to re-assess the situation.
IMHO TestCase? is not the right place for this. TestCase? can only define setup and teardown for a single unittest class, this needs to define something that can be a product of the test environment that encompasses multiple test classes and suites.
It's been a while since I monkeyed around with unittest, but having done this in the past I know it's not easy and is one of the primary reasons people don't like unittest.
I also haven't looked at the test infrastructure for django in a long time, just a quick glance shows that a lot seems to have changed, so I don't know if this is even possible.
Eventually I'm going to need to integrate windmill () tests in to my own django project. When that work is finished I'll send a proposal to the list to see if it would be a valued contribution, but doing this right will probably require a new collector based framework with more acutely defined setup/teardown.
In the meantime, if anyone wants to take on this work again ( adding a TestCase? with live server support ) they are more than welcome to it, you can email me personally if you have any implementation specific questions. I'm more than happy to help I'm just not up for wrangling unittest anymore.
I'm more than happy to help I'm just not up for wrangling unittest anymore.
I'm more than happy to help I'm just not up for wrangling unittest anymore.
Well, it's not a problem to implement nose et al via settings.TEST_RUNNER option. However, even if SeleniumTestCase? (or equivalent) is discovered, connected to selenium proxy and browser is started successfully, tests are still not usable as they cannot connect to live server :]
(IMHO biggest flaw in unittest is absence of SkipTest?...)
Well, the last patch seems not to work in current trunk version. The structure of django/core has changed, and I see no obvious way of implementing the patch.
I've created my own patch which seems to do the job. At least it works for me. Probably you'll need Python 2.5 to run it and, of course, Linux (since the fork() method is not implemented in Windows, as I believe). I tested it only with SQLite and really don't know if it'll work with other backends.
The patch does a simple thing: after initializing environment and creating a test database, it forks and in child process a Django server is started. Being a child, it inherits the environment. In parent process, a test is started (which can include both Django TestCase and Selenium methods), and when it finishes the child is killed and the parent returns the result. The patch consists of two files, one for django/test/simple.py and another for django/test/utils.py.
One little thing to mention: I couldn't manage to make this idea work when I used :memory: database (which is default for SQLite in tests), thus I changed it to file. Then it worked.
Using this method you probably will not have to use testserver (see [5912]).
Any comments would be much appreciated.
The second part of the patch
Sorry, but I can't figure out why diff for the first file isn't displayed.
I have something working that will add this functionality to TestCase?. I'll clean up my patch, write some tests, and get something here today or tomorrow.
I've added this functionality to the TestCase? class.
The problem Mikeal mentioned with regards to in-memory databases () was not a problem with the database, only in the attempt to access an in-memory database within a separate thread. Because of the threading, we have to special case in-memory databases.
Tests and documentation changes are included.
Could somebody clarify for me what this does that can't be handled by, or built on top of, django-admin.py/manage.py testserver? Since that's implemented as a management command, it can be called from Python code as needed, which (if this feature is still needed to address a problem with testserver) looks like it'd greatly simplify this patch.
Yeah, directly calling testserver would greatly simplify things. The main problem with that, however, is that testserver isn't stoppable. After running a test, I need to be able to stop the server, reload fixtures, etc.
I agree though, I'm not happy with how much code I'm repeating. Suggestions welcome.
I'm working on improving the way this fails when problems starting a test server are encountered. I'll have a better patch shortly.
fixes way server handles error on startup
refactor and cleanup
One thing to note about my patch, while it's on my mind:
In order to be able to stop the running test webserver, I need to set a socket timeout on waiting for requests. eg:
def server_bind(self):
""" Sets timeout to 1 second. """
WSGIServer.server_bind(self)
self.socket.settimeout(1)
But what happens when you're waiting on a request that always takes longer than 1 second? You'll continually timeout? You'll fail on a selenium wait? Decidedly something quote unquote not good.
I don't know a clean solution to this. I need some kind of timeout. Otherwise, there's no way to stop mid handle_request(). And then no way to kill the server after a test is done. It'll just hang on that handle_request(). But any timeout is subject to problems if it needs to handle a request that's going to take longer.
# Loop until we get a stop event
while not self._stopevent.isSet():
httpd.handle_request()
And since there's no way to kill a python thread, there's no other way to stop this server.
We could have this timeout as a parameter and default to 1s. That way at least if people run up against the problem they can up the timeout and fix it.
Nevermind. The patch handles this.
That server_bind timeout is just for receiving requests. Once the server grabs a request it sets the socket for handling it to None.
sock, address = self.socket.accept()
sock.settimeout(None)
return (sock, address)
So requests can take however long.
Actually. The timeout should be a lot smaller then. 0.001s working fine for me.
Updated devin's patch with last changes in testing environment
I borrowed the code from the latest patch here to add Django support to windmill.
It hasn't been fully documented yet but the simplest case was written up for the email list.
Replying to anonymous:
Replying to mikeal:
I borrowed the code from the latest patch here to add Django support to windmill.
I borrowed the code from the latest patch here to add Django support to windmill.
I borrowed the code and approach too and added it as nose plugin, so it's usable for usual test cases too. It's part of django-sane-testing package, you can subclass from HttpTestCase? or add start_live_server attribute to you testcase class and server should be available.
untested patch for Django 1.0.2
By Edgewall Software. | http://code.djangoproject.com/ticket/2879 | crawl-002 | refinedweb | 2,137 | 73.17 |
Fwd: Perl SOAP::Lite and Apache Axis2 interoperability
Expand Messages
- ---------- Forwarded message ----------
From: Anne Thomas Manes <atmanes@...>
Date: Aug 21, 2006 4:46 PM
Subject: Re: Perl SOAP::Lite and Apache Axis2 interoperability
To: axis-user@...
Your SOAP envelope has specified the wrong namespace. It should be:
"".
Your SOAP message specifies:
xmlns:soap=""
That's the namespace for the SOAP extension for WSDL.
You'll have to check with the Perl folks to figure out how to fix it.
Anne
On 8/21/06, Kinichiro Inoguchi <ingc1968@...> wrote:
> Aleksey,
>
> I'm using Active Perl 5.6 within Windows XP.
> And SOAP::Lite version is 1.47. (I checked Lite.pm file.)
>
> And I'm using Axis2 nightly build war distribution
> with JDK1.4.2 and tomcat5.0.28.
>
> I hope this info helps you.
>
> Regards,
> kinichiro
>
> --- Aleksey Serba <aserba@...> wrote:
>
> > Kinichiro,
> >
> > > Your script worked in my environment,
> > > and I could see result.
> > > $VAR1 = 'Hello I am Axis2 version service , My version is
> > #axisVersion#
> > > #today#';
> >
> > Hmm, strange.. Thus the problem is in environment.
> > What version of SOAP::Lite module and axis2 dist do you use?
> >
> > Anyway, this is a good info for me - i'll dig into different
> > versions.
> >
> > Thanks
> >
> > Aleksey
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: axis-user-unsubscribe@...
> > For additional commands, e-mail: axis-user-help@...
> >
> >
>
>
> __________________________________________________
> Do You Yahoo!?
> Tired of spam? Yahoo! Mail has the best spam protection around
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: axis-user-unsubscribe@...
> For additional commands, e-mail: axis-user-help@...
>
>
---------------------------------------------------------------------
To unsubscribe, e-mail: axis-user-unsubscribe@...
For additional commands, e-mail: axis-user-help@...
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/5560?l=1 | CC-MAIN-2015-40 | refinedweb | 273 | 63.25 |
Class simple name is the name between
class keyword and
{.
When we refer to a class by its simple name, the compiler looks for that class declaration in the same package where the referring class is.
We can use full name to reference a class as follows.
com.java2s.Dog aDog;
The general syntax specifying access-level for a class is
<access level modifier>class <class name> { // Body of the class }
There are only two valid values for <access level modifier> in a class declaration:
No value is known as package-level access. A class with package-level access can be accessed only within the package in which it has been declared.
Class with public access level modifier can be accessed from any package in the application.
package com.java2s; public class Dog { } | http://www.java2s.com/Tutorials/Java/Java_Object_Oriented_Design/0020__Java_Class_Access_Level.htm | CC-MAIN-2017-22 | refinedweb | 133 | 61.67 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
Hello all,
I'm experiencing trouble connecting to a Novell NetStorage webdav
server. I installed the last version (1.0.2) of davfs2 on my Ubuntu
6.06, but still cannot get the connexion.
Here is the transcription of a typical session :
> xavier@...:~/davfs2-1.0.2$ sudo mount.davfs /media/dav
>
> Please enter the username to authenticate with server
>
> or hit enter for none.
> Username: robin0
>
> Please enter the password to authenticate robin0 with server
>
> or hit enter for none.
> mount.davfs: Authentication with server or proxy failed.
> Look up the log files for details.
Neon version is 0.25.5 (official from Ubuntu) and is located in /usr.
So : where can I find the log files ?
I also tried the version of davfs2 bundled with Ubuntu (0.2.8) and I get
a slightly different output, as it asks me if I agree to connect to a
server whose identity cannot be verified. I ask 'yes' and get a '401
Authorization Required' error. I know the server certificate is outdated
(KDE webdav also complains for that), but version 1.0.2 doesn't even
asks me if I want to connect anyway.
I found an "old" discussion about probably the same problem :
And also, a site has a page about that issue :
Visibly, it is a problem of cookies, removed from libneon-0.25.
I tried to apply their solution unsuccessfully (a pid-lock problem).
I'm not a programmer, but I would be really happy if I could help
resolving this issue.
Greetings,
Xavier Robin
Hello Xavier,
it looks like Novell is doing some non-standard authentication like
HTTP-Authentication combined with cookies. But as long as the connection
is not secured, HTTP-authentication is not allowed by the standard. The
RFC demands that servers must support Digest Authentication in this
case. (Ther is a discussion about the use of cookies in the webdav
working group:)
But I am not really sure, what NetStorage is doing.
To get it running soon you might follow the advice in. It looks quite reasonable to me.
If you want to apply this patch to davfs2-1.0.2 you mitght add to file
webdav.c:
In section
/* Private global variables */
/*==========================*/
add:
#if NE_VERSION_MINOR == 24
ne_cookie_cache *loretta = NULL;
#endif
At the end of function dav_init_webdav()
add:
#if NE_VERSION_MINOR == 24
loretta = ne_calloc(sizeof *loretta);
ne_cookie_register(session, loretta);
#endif
If this works for you, please file a bug report against Novell
NetStorage, saying you want it to work with either Digest-Authentication
or plain HTTP-Authentication over TLS, according to RFC 2518, 17.1
Authentication of Clients.
To investigate what is really going on, you might help with this:
- the log messages from davfs2. Logs are in /var/log/. In which file you
will find the messages from davfs2 depends on the distribution. You may
look for messages, syslog, daemon.log, ?.
- you will get a lot of debug messages if you configure davfs2-v1 with
--enable-debug=secrets. But be sure to remove your username and password
from this messages before sending them to me.
- the log entries from the server might be interesting too.
- if you start the connection as plain text HTTP ("http://...";)
recording the traffic with ethereal might be helpful.
- As you use "http://..."; but there are also complaints about
certificates, NetStorages seems to upgrade from plain HTTP to TLS. You
might use an "https://..."-url instead, so davfs2 will use TLS from the
start. What will happen then?
Greetings
Werner
Werner Baumann a =E9crit :
> Hello Xavier,
>=20
> it looks like Novell is doing some non-standard authentication like=20
> HTTP-Authentication combined with cookies. But as long as the connectio=
n=20
> is not secured, HTTP-authentication is not allowed by the standard. The=
=20
> RFC demands that servers must support Digest Authentication in this=20
> case. (Ther is a discussion about the use of cookies in the webdav=20
> working group:=20
>)
>=20
> But I am not really sure, what NetStorage is doing.
Ok, actually it was an error on my part for the http:.
But I just tried with KDE and http works as well.
> To get it running soon you might follow the advice in=20
>. It looks quite reasonable to me.
> If you want to apply this patch to davfs2-1.0.2 you mitght add to file=20
> webdav.c:
>
> (...)
>=20
> If this works for you, please file a bug report against Novell=20
> NetStorage, saying you want it to work with either Digest-Authenticatio=
n=20
> or plain HTTP-Authentication over TLS, according to RFC 2518, 17.1=20
> Authentication of Clients.
I modified the file, make clean, ./configure --enable-debug=3Dsecrets,=20
make and make install (actually I use checkinstall for easy=20
uninstalling, but it issues a make install).
But I still have the error :
> Accept certificate for this session? [y,N] y
> mount.davfs: Authentication with server or proxy failed.
> Look up the log files for details.
Davfs2 1.0.2 asks me for the certificate if I use https. And=20
authentification fails.
> - the log messages from davfs2. Logs are in /var/log/. In which file yo=
u=20
> will find the messages from davfs2 depends on the distribution. You may=
=20
> look for messages, syslog, daemon.log, ?.
>=20
> - you will get a lot of debug messages if you configure davfs2-v1 with=20
> --enable-debug=3Dsecrets. But be sure to remove your username and passw=
ord=20
> from this messages before sending them to me.
The content of syslog is in the attached file. Visibly, everything seems=20
to go right, but neon fails during the authentification. An exact copy=20
of it is in the debug log. It isn't very "verbose" on the exact error.
Version of neon is 0.25.5.dfsg-5.
I found a package neon-dbg in my package manager. I installed it,=20
reconfigured and built davfs2, but I got no more information.
Perhaps I should rebuild completely neon with a debug option to get=20
further informations ?
Nothing in user.log and in daemon.log or dmesg.
> - the log entries from the server might be interesting too.
Unfortunately I can't access to them :-(
> - if you start the connection as plain text HTTP ("http://...")=20
> recording the traffic with ethereal might be helpful.
I don't know exactly how it works, but I could capture the traffic. I=20
can see a 401 Authorization Required error, but the details and the=20
other packets are very obscure to me.
Unfortunately, I don't know exactly how to remove my password from this=20
binary file (I think it is inside...), so I don't dare to send it as is.
I'll try to understand it exactly tomorrow.
Thanks for your help !
Xavier
Xavier Robin a =E9crit :
> The content of syslog is in the attached file.
But I forgot to attach it.
Sorry for the annoyance. It is here
Hello Xavier,
it is definitely a problem with authentication between davfs2 and
NetStorage. You made sure, the password you found in the logs is
correct. So everything looks just as described in.
But the patch from this site (the one I sent you is essentially the
same) does not work with neon 0.25. The neon people have removed cookie
support, as they think their code was outdated and they hadn't time to
write a new (I assume they do not think it is important, too). So you
have to use neon 0.24. To do this:
Get the package and unpack.
Configure with options --enable-shared and --with-ssl.
Please examine the output of configure. It should end like this:
Install prefix: /usr/local
Compiler: gcc
XML Parser: libxml 2.6.16
SSL library: OpenSSL (0.9.7 or later)
zlib support: found in -lz
Build libraries: Shared=yes, Static=yes
"Install prefix: /usr/local" will make sure it does not interfere with
the neon package from your distribution. So other applications like
konqueror may continue to use the neon lib from the distribution.
Do "make" and "make install".
Now you can configure the patched version of davfs2 again. Without an
options it should use the neon library from /usr/local. Check the
output, it should end like this:
Install prefix: /usr/local
Compiler: gcc
neon library: library in /usr/local (0.24.7)
After "make" and "make install" it should now work with neon 0.24.7 and
the patch should have enabled cookie support.
It is best, if you removed the debian/ubuntu-davfs2-package before.
Hint to capture the traffic:
If you are using ethereal, there is a tool "Analyze/Follow TCP-Stream".
It will take all the packets that belong to the same TCP-connection as
the highlighted packet, and display the content (e.g. HTTP messages) in
a separate window. There you can mark the text with the mouse and paste
it into some editor.
Please note: This will apply a filter on your packages so ethereal will
only show the packages that match this filter. If you want to see the
other packets again, you have to remove the filter.
Note 2: A conversation between davfs2 and a WebDAV server may consist of
more than one TCP stream.
401 Authentication required:
It is OK to get this message. But davfs2 should then send another
request including some credentials (hopefully not readable) and this
should succeed.
If you capture the traffic, please look for Cookie headers.
Greetings
Werner
Hello again.
Neon compiled and installed fine.
But I get errors when compiling davfs2.
./configure works well :
> Using configuration for building davfs2 1.0.2:
>=20
> Install prefix: /usr/local
> Compiler: gcc
> neon library: library in /usr/local (0.24.7)
>=20
> Now run 'make' to compile davfs2
But make aborts with this error :
> src/webdav.c:120: erreur: syntax error before =AB*=BB token
> src/webdav.c:120: attention : type defaults to =ABint=BB in declaration=
of =ABloretta=BBsrc/webdav.c:120: attention : la d=E9finition de donn=E9=
es n'a pas de type ni de classe de stockage
> src/webdav.c: Dans la fonction =ABdav_init_webdav=BB :
> src/webdav.c:252: attention : implicit declaration of function =ABne_co=
okie_register=BB
> make: *** [src/webdav.o] Erreur 1
It seems linked to the parts of code I added to webdav.c. I copied=20
exactly the code you gave me, and I think put it at the correct lines=85
For ethereal, I'll look at it soon.
Thanks,
Xavier
Hello Xavier,
sorry for this. I forgott to include the necessary neon header for cookies.
Please add line
#include <ne_cookies.h>
at the beginning of webdav.c (best between #include <ne_basic.h> and
#include <ne_dates.h>, so it will be properly in alphabetical order ;-) ).
Greetings
Werner
Hello Xavier,
there is another detail I forgott about.
In order davfs2 can find your neon library in /usr/local/lib you need an
symbolic link in /usr/lib (there should be a better way, but I don't
know of it).
Please create (as root) in /usr/lib a symbolic link:
ln -s /usr/local/lib/libneon.so.24.0.7 libneon.so.24
Of course "/usr/local/lib/libneon.so.24.0.7" might be slightly different
for the library you compiled.
You may check with
ldd mount.davfs
whether all libraries for davfs2 are found.
Greetings
Werner | http://sourceforge.net/p/dav/mailman/message/955071/ | CC-MAIN-2015-18 | refinedweb | 1,919 | 68.36 |
mastercoin MasterCoin quotientcoin_xqn.png 2014-11 161 XQN Quotient 2014-11-11 1% 61 1618033 1,618,033 coins (proof-of-work), 1,618% PoS interest rate, 618 coin cap on stake reward, 61 second block spacing, 161 blocks to maturity, Min Coin Age 16 hours, 1% Premine (16,180 coins) - so the developers are well fed ico-pow-engagement Dissemination via purchase, then by proof of work and then by giveaway ico-engagement Dissemination via ico, then media engagement profitcoin_pfc.png 2014-11 PFC ProfitCoin 2014-11-11 20% 7 300 premined 10000000 PFC is the only one cryptocurrency with guaranteed profit of 10 - 20% per month, Coins total: 10 000 000, 7 PFC per block, Block generation: 5 minutes, About 2 000 PFC generating every day, Premine 20%, We're planning to start accepting ProfitCoins for hashrate purchases in hashprofit.com service before the end of November (approximately 25th of November), Post address 121 Amathountos Avenue, Agios Tychonas 4532, Limassol, Cyprus. nist6 Folklore combiner: composed of 6 algorithms approved by the National Institute of Standards and Techology (BLAKE, BMW, Grøstl, JH, Keccak, Skein) c11 Chained 11 hash functions vanilla Vanillacoin protocol. momsha momentum plus a round of SHA2-512 ico-freebie Dissemination via purchase and by giveaway 3s Folklore combiner: composed of SHAvite, SIMD and Skein. unknown Dissemination scheme is not known or undeclared czarcoin_czr.png 2014-10 CZR Czarcoin 2014-10-21 100% 60 premined 100000000000 Algorithm: Scrypt, ~100% Mined with fair distribution to the public, 100,000,000,000 CZR Issued with a target of 1% annual inflation. commodity coin for czarparty fimkrypto FimKrypto, Whirlpool Whirlpool realpay Realpay 2nd gen realpay RealPay intercoin Intercoin 2nd gen, details opaque intercoin InterCoin upcoin_up.png 2014-04 UP UpCoin 2014-04-03 (0.1, 4.1) 60 300000 Algo: Fresh, Block Time: 120 seconds, Block diff retarget: every block, Block reward: , 1st block premine of 14100 coins, block 2 - 100 : 0 UP for fair launch, blocks 101 - 7200 : 500 UP, blocks 7200 - 14000 : 250 UP, blocks 1400 - 28000: 125 UP, Total PoW coins 7'050'000 UP + premine, Stake interest: 15%, Min Stake Age: 4 hours, Max stake age 12 hours. growcoin_grow.png 2014-11 GROW Growcoin 2014-11-09 1005000 60 2587800 JH NIST 2nd round candidate dividend Dividend paid proportionally to owners of online wallets lioncoin2_lion.png 1 block 2014-11 10 60 LION2 Lioncoin 2014-11-23 0.5% 60 6400000 X15 Algo., Abbreviation - Lion, Total coins - about 6.4 Million., 60 seconds block ., Difficulty adjusts every block., 10 confirmations for transactions., 60 confirmations for mined blocks., 7% POS Annual , POS starts at block 7 000, POW Block 7 000 (about 5 DAYS)., POW BLOCK REWARD Block 1-200 -- 1 COIN, Block 201-400 -- 10 COINS, Block 401-600 -- 100 COINS, Block 601-7000 -- 1 000 COINS , , P2P: 20010 RPC:20020, 0.5% Premine ripple Ripple, 2nd gen ripple Ripple thiamine Folklore combiner: (BLAKE-512, SHA2-512, BMW-512, CubeHash-384, Whirlpool Enhanced, Groestl-512, JH-512, SHA3-512, Skein-512, Luffa-512, Tiger, Tiger v2, RipeMD-160, Cubehash-512, Panama, Shavite-512, SIMD-512, Echo-512, Fugue-512, Hamsi-512, Shabal-512, HavalType5-256, Standard Whirlpool) poc-pow Proof of chain followed by Proof of Work. nfd NFD NXT Fair Distribution poc-registration Distribution by proof of chain and by registration. stellar Stellar protocol. NeoScrypt NeoScrypt x12 Chain of 12 hash functions dcrypt SHA256 made difficult to parallelise via using leapfrog hashing pob Proof-of-Work plus Proof-of-Stake augmented by Proof-of-Burn nostrum Nostrum engagement Dissemination via engagement arenacoin_arn.png 2014-11 ARN ArenaCoin 2014-11-08 100% 30 1000000 ZeroCoin implementation pure POS, 500% annual interest, encrypted messaging. 100% of the supply is available for the initial coin offer, the price will be determined at the end of the ico based on total investments. Initial coin offer will be closed Tuesday 11 / 11 / 2014, the Coin will be distributed to all investors the same day. roulette Folklore combiner: first block is hashed with SHA2, then 16 rounds of hashing are performed, each round with randomly chosen algorithm from the set of 16 hashing algorithms: (BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, SHA3, Luffa, SHA2, Shabal, SHAvite-3, SIMD, Skein, Whirlpool). x17 Folklore combiner: composed of BLAKE, BMW, Grøstl, JH, Keccak, Skein, Luffa, CubeHash, SHAvite, SIMD, ECHO, HAMSI, Fugue, Shabal, Whirlpool, SHA2big, Haval 5-pass nxt NXT 2nd gen electric_ele.png 2014-01 ELE Electric 2014-01-02 60 superseded 10000000000 bitbaycoin_bay.png 2014-11 10 50 BAY Bitbay 2014-11-07 100% 64 closed source 1000000000 Coin Specs (POS 2.0):, Block time: 64 seconds, Nominal stake interest: 1% annually, Min transaction fee: 0.0001 Bay, Confirmations: 10, maturity: 50, Min stake age: 8 hours, no max age, Total: 1 Billion, Ticker: BAY, Smart Contracts , Advanced Smart Contracts: Machine to Machine advanced contract fulfillment (IOT: Internet of Things), Multi-Sig, Joint Accounts, Muti-Coin Wallet Features, Decentralized marketplace (A new approach different from OpenBazaar), POS 2.0 (Theoretical “51% attack” is nearly impossible), Convert BitBay into a hedging coin (price will be pegged to RnB), Windows, Mac, Linux, and Mobile Staking Wallets (Android, iPhone, Windows), In-wallet resources (Chat, Block Explorer, Gaming, Community)., Proof of Developer (POD) and Crypto Certify ratings in progress. bitcoin2 Bitcoin2 protocol. airdrop-engagement Dissemination first via airdrop, then also media engagement cfc-pow Dissemination via crowdfund campaign and by proof of work scrypt-n Adaptive N-factor scrypt colbertcoin_clbc.png 2014-04 CC Colbertcoin 2014-04-12 unknown Protection scheme is not known or undeclared twe Chain of 11 hash algorithms faucet Dissemination via faucet poq Dissemination via performing in-game quests. Grøstl NIST 2nd round candidate engagement-pow Dissemination via engagement, then pow. glowshares_gsx.png 2014-11 50 GSX Glowshares 2014-11-10 60 1000000 Proof of Stake, Block Time: 1 Minute, Interest Rate: 7% annually, Minimum Coin Age: 8 hours, Maximum Coin age: 45 days, Block Maturity: 50 blocks yescrypt BOOST Y-scrypt node Node, rename of NXTL, NXT Lite adcoin_xad.png 2014-11 XAD Adcoin 2014-11-11 60 9800000 ADCoin(XAD) is a Scrypt based cryptocurrency with 60 seconds block time. As you can see from the title, ADCoin will be the game changer of cryptocurrency and online advertising industry. poc Dissemination via proof of chain boinc Based on research contribution to BOINC clearinghouse ClearingHouse por Protection solely by Proof-of-Resource boardcoin_board.png 2014-11 120 BOARD Boardcoin 2014-11-11 100% 60 12000000 We will have marketplaces on every forum where members can buy or sell stuff for Boardcoin. We attempt to back 1 boardcoin = $0.1 using our digital assets such as advertisements and classified listings. ico-freebie-pow Distribution via ICO, giveaway and Proof of Work. Dagger Dagger 2nd gen randpaulcoin_rpcd.png 2014-11 RPCD RandPaulCoin 2014-11-05 100% 20 Bitshares X copy 1000000 As per the BitShares community consensus, we will initialize the Genesis block with 10% AGS, 10% PTS — and as stated earlier, 80% for owners of Ron Paul Coin. dpos Protection solely by Divined Proof-of-Stake bitshares BitShares radix 2nd gen, no details, reportedly in development radix RADIX, in permadevelopment scrypt scrypt, based on tarsnap kind Goods or services in kind interest Interest pow-pos Proof-of-Stake, augmented by a fixed-duration Proof-of-Work phase hexioncoin_hxc.png 1 block 2014-11 HXC Hexion 2014-11-13 347.2 60 1000000 SHA256D, 1,000,000 Total PoW Coins, 347.2 HXC per block, 1 minute block intervals, 1 block difficulty adjustments, 2880 PoW Blocks ( 2 days ), PoS min age 1 hr - max age unlimited, PoS interest 10,000% Yearly entropybitcoin_ebt.png 2014-01 EBT Entropybit 2014-01-14 1% 1000 30 200000000 (see below). Scrypt-based cryptocoin with 1000 coins per block and 250 million total coins. ('h', 100000, 'b') nxthorizon Next Horizon noodlyappendagecoin_ndl.png 2014-01 NDL NoodlyAppendage scrypt-n-f Uses a fixed N-factor to avoid anticipated network issues fudcoin_fud.png 2014-11 60 FUD FUDcoin 2014-11-13 240 20001561 Ticker Symbol: FUD, Algo: SHA-256, Block Time: 4 minutes (240 seconds), Block Reward: Smoothly decreasing exponential (see chart below), Total PoW Blocks: 43,200 (~120 days), Money Supply: PoW 20,001,561 FUD, PoS Rate: 35%, PoS Min / Max Age: 6 days / 18 days, Mining/Minting Spendable: 60 blocks, Addresses begin with “F” novel Novel, singleton acro ACRO, in permadevelopment pow-ico Dissemination via proof of work and by purchase cfc Dissemination via crowdfund campaign pow Consensus mechanism obtained from Proof-of-Work x15 Folklore combiner. Chain of 15 hash functions nzh NHZ NXT New Horizon 2nd gen fico-ico-airdrop Dissemination by fixed ICO, then ICO, followed by airdrop bitcoin Bitcoin chance Gaming-specific meta-protcol doublecoin DoubleCoin, facetious clubbed Dissemination to membership Twister Twister poc-sale Dissemination via proof-of-chain and sale Luffa Luffa bcrypt blowfish-crypt scrypt-n-m Scrypt-n using M modifier, ASIC-hostile x11 Chain of 11 hash functions edgecoin_edge.png 25 blocks 2015-06 50 EDGE Edge 2015-06-07 0 100 60 1000000 EDGE: A I2P-Centric Gaming Crypto-Currency v.1.0 fixcoin_fix.png 2014-04 FIX Fixcoin block range coin rewards and 27 million total coins. ivugeoevolutioncoin_iec.png 2015-04 IEC IvugeoEvolutionCoin 2015-04-17 7.6 60 100000000 [IEC] IvugeoEvolutionCoin [SCRYPT][POW] - Gold Backed, Scrypt Algorithm, Proof of Work, Block time 1 minute, 7.6 coins per block, 100 million total coins. Every single coin has certain value in gold.To avoid reducing the value of our COIN it must always be balanced gold standard. Now we have 60% of the coins covered in gold(we have our gold in Gold wire - it's used in Electronics) with worth 36 million euros. Gold standard will be always increased with how many coins will be mined. Since we are trying to diversify risk, next increasing of gold standart will be in Troy ounces psilocybincoin_psy.png 2015-05 PSY Psilocybin 2015-05-27 60 2141400 [PSY] Psilocybin | SporeNet | No ICO/Premine | SHA256d | NINJA!, PoW Reward Structure:, Blocks 1 - 250 = 1500, Blocks 251 - 500 = 1000, Blocks 501 - 750 = 800, Blocks 751 - 1,000 = 600, Blocks 1001 - 1,250 = 400, Blocks 1251 - 1,500 = 400, Blocks 1501 - 1,750 = 600, Blocks 1751 - 2,000 = 800, Blocks 2001 - 2,250 = 1000, Blocks 2251 - 2,500 = 1500 positroncoin_tron.png 2015-04 100 TRON Positron 2015-04-11 0 90 2000000 Positron has a rapidly changing Proof of Work and Proof of Stake system, and it's first goal is to test an evolving POS system and observe it's effects on the blockchain and in the altcoin marketplace. Positron is built on the foundation of Bitcoin, PPCoin, Novacoin, and BitcoinDark, with a modified POS system., Short: TRON, Algorithm SHA256, 90 Seconds Per Block, 100 Blocks to Confirm, 20MB Blocksize, Proof of Work rejected after block 3400, Total approx potential coins from POW: 900,000, lower with mixed POS at block 2700. 3 days mining., Dynamic POS last approx one month. End Block 21,000, Total Approx Coins from first month of Dynamic POS: 614,000, Total Approx Total Coins 1.5m. After block 21,000 POS goes to approx 9% yearly. icecoin_ice.png 2013-06 ICE IceCoin 2013-06-11 DoA, relaunched by someone else and dead again. distrocoin_distro.png 1 block 2014-05 DISTRO Distrocoin 2014-05-12 0.7% 50 100000000 Stopped 5/15/2014. Sources removed. ('h', 1000000, 'b') bluechip_bch.png 2014-06 10 30 BCH Bluechip 2014-06-11 1% 60 15000000 BlueChip Specifications:, X13 algorithm, 15 million coins mined during the PoW phase, 1% Premine, 7 Day PoW, 15 % PoS, Confirmations - 10, Maturity - 30, PoW BlueChip Block Reward Structure:, 2-100: 1 BlueChip, 101-3100: 2,000 BlueChips (6mil ~2 days), 3101-8100: 1,000 BlueChips (5mil ~3.5 days), 8101-10,100: 2,000 BlueChips (4mil ~1.5 days mybroscoin_mbc.png 24 hours 2014-01 MBC Mybrocoin 2014-01-18 0 100 120 premined 100000000 re-target every 2016 blocks, 100 coins per block, and 100 million coins. Mario Bros Coin - MarioBrosCoin Abbreviation: MBC Algorithm: SCRYPT Date Founded: 1/18/2014 Total Coins: 100 Million Confirm Per Transaction: Re-Target Time: 24 Hours Block Time: 120 Seconds Block Reward: 100 Coins Halved 840k coins Diff Adjustment: Premine:187 Blocks. ('h', 840000, 'c') whycoin_why.png 2015-10 50 WHY Whycoin 2015-10-15 100% 90 50000000000 Type: Proof of Stake (PoS), Block Time: 90 seconds, Stake Interest: 5%, Minimum Coin Age: 8 hours, Block Maturity: 50 blocks, 50 Billion pre-mined with safety in mind that if all 50 Billion were to be staked at 20% interest, it would be a 12 billion rise yearly. This has been stated to not be the case, but on the off-chance it were consistently staked it would run through safely for a bit under 3 years. If later on this coin were to achieve widespread adoption, an upgrade and refactoring of the system could take place with a hard fork to push it beyond these limitations. firecoin_fec.png 2014-01 FEC Firecoin orangecoin_oc.png 2014-05 OC OrangeCoin re-target every 1 block, 5000 coins per block, and 200 million total coins. emiratescoin_uae.png 2014-05 UAE EmiratesCoin 651000000 930000000 1) About UAE United Arab Emirates - the land of great prospects. Bitcoin isn't banned here. Moreover, here are installed ATMs which give an opportunity to buy Bitcoins. Such possibility significantly promotes the development of cryptocurrency. The number of rich citizens and tourists in the country is great. National cryptocurrency can attract wealthy Arab Investors to the EmiratesCoin as well, as to the Bitcoin. It allows to strengthen and even increase the Bitcoin's rate. We are planing to distribute the coins to the population of the EmiratesCoin to catch Arab investor's attention. This operation will be carried out this summer. Our currency is created for direct exchange for Bitcoin. Trades will be able to exchange EmiratesCoin and vice versa right in our purse using exchange BTC/UAE, development of which we will begin in the near future. United Arab Emirates is a new state, which has a great stocks of natural resources, such as oil and natural gas. Every fifty-fifth country citizen, according to statistics, is a millionaire. Gross domestic product is about 384,196 billion dollars. So, citizens of this country are always ready to go on invests in perspective economic spheres, one of which is a cryptocurrency exchange. So Implementation of the national cryptocurrency could be a new breakthrough in the economic of UAE. 2) Specifications Total coins: 930,000,000 Premine: 651,000,000 IPO and Bounties: 23,250,000 For UAE population: 627,000,000 Avaible for mining: 279,000,000 PoW:X11 DGW 60 second block times Block Rewards 1-300,000 500UAE 300,001-600,000 300UAE 600,001-900,000 100UAE 900,001-1,000,000 90UAE 3) Airdrop AirDrop will be held in 6 stages. Each resident will receive 60 coins a half years. The first stage will be held the first of June and here will be given 15 coins. The following steps will be carried out everyfour months with reduction in the number of coins in each stage by 2. (1st stageâ15 coins, 2ndâ13..., 6th stageâ5 coins). Coins can get any UAE citizens that have Emirates Identity Card. We have links with some private companies in the UAE that have an opportunity to conduct verification by the identification card number of UAE citizen. This will allow us to avoid fraud by unscrupulous participants of our event. Here are five simple steps to getting EmiratesCoin for UAE residents: 1) Name card's holder should proceed to the section of our website, called AirDrop. 2) On the next page, the user must enter his identification number (ID). 3) If the ID, after checking carried us, belongs to the citizen of region Arab Emirates, the user will be transferred to the next page. 4) On the new page, the user will be offered a choice: to authenticate through a personal page on Facebook, or via cell phone number. At the appropriate place will be sent a confirmation code that should be entered in a special field on our web site. 5) Upon successful passing of the last stage, the user should enter his EmiratesCoin purse number in the window that appears. If this wallet doesn't exist then the new unique number wallet is created for user. In the near future purses released for mobile devices. At this stage the process of receiving the coins to the user is ending. 4)IPO Your investment will serve as a support for EmiratesCoins. All the money will go to the improving equipment, ensuring stable operation of the servers, advertisement, creating an online store and exchange. All of this will consolidate UAE coin on the cryptocurrency exchange as one of the most perspective and reliable coins. Total coins for IPO: 23,000,000 Price: 0.0000025 UAE/BTC IPO Wallet: How to invest? Click on the link: Step 1: send BTC to 1BzgQ3sRNABgSHcxairjEtXfLBwZQBKCQ4 Step 2: Enter your bitcointalk name(if you have it) in "Name". Step 3: Enter your REAL e-mail in "Email address". Step 4: Paste your TXID in "Transaction ID(TXID)". Step 5: Click "Send". 5) Extra Info Wallet preview: Video: Bounties: Spanish translation - 5,000 uae Chinise translation - 10,000 uae Russian translation - 10,000 uae German translation - 5,000 uae Italian translation - 5,000 uae Fisrt Exchange - 50,000 uae Second Exchange - 50,000 uae First Mining Pool - 35,000 uae Second Mining Pool - 35,000 uae Fisrt p2p Pool - 20,000 uae Second p2p Pool - 20,000 uae megcoin_meg.png 2014-05 MEG Megcoin 2014-05-03 -1 eticoin_etc.png 2014-03 ETC Eticoin 2014-03-01 60 13500000 Posted on March 1, 2014, issue special coins E is the total four-year total 13.5 million, of which 7.2 million the first year, 2,000 a day, the second year of 3.6 million, 1,000 a day, the first 1.8 million years, 500 a day, in the fourth year 900 000, 250 a day. The average daily total allocation for 24 blocks, each one hour to produce a block and equal number of each block of the mine pool. All mining machine operator force a mining exploration focus, this block is decrypted when open, then the block is automatically assigned to all mineral pools involved in mining ore machines which, based on how much is allocated to each respective mining machine the ability to run integrated computing competence. coinmarketscoin_jbs.png 1 block 2014-09 10 500 JBS Coinmarketscoin 2014-09-01 60 rebrand to jumbucks 7750000 Specifications,., Ticker symbol: JBS (1 Jebus. 0.00000001 JBS = 1 Baby Jebus. Plural is Jebii), Proof of work, Algo: scrypt, Block reward:, 1-120: 1 JBS (Fair launch), 120-31000: 250 JBS, no halving, Max height: 31000 (after this network will not accept PoW), Max supply after PoW ends: ~7.75 million JBS sproutcoin_sprt.png 2015-06 SPRT Sprouts 2015-06-30 150 -1 Sprouts - The Entry level Cryptocurrency., Hybrid - PoW/PoS, PoW Algo - SHA256D, PoS Reward - 5% 5Days, BlockReward - Depends on Difficulty, Blocktarget - 2.5 minutes sjwcoin_sjw.png 10 hours 2015-06 SJW SJWcoin 2015-06-20 5000 60 tacocoin clone 50000000 SJWcoin.com, you better check that privilege, 5K blocks every minute, 10 Hr Re-Target, Merge-Minable, Not Racist, Reddit tipbot (/u/sjwcointipbot), block-explorer , Hates misoginysts ('h', 50000, 'b') ponycoin_pony.png 1 hr 2015-07 PONY Ponycoin 2015-07-22 60 60 108000000 Ponycoin is built off of the Bitcoin 0.8 wallet. turbocoin_xtb.png 2014-01 XTB Turbocoin candlecoin_cd.png 2015-11 CD Candlecoin 2015-11-22 5% 60 30000000 CANDLECOIN, CD, Algo : SHA256d , PoS 30 % Annual , Blocktime - ~60 seconds , RPCPORT : 38876, P2PPORT : 38877 , Min. Stake Age : 3 Hours , Max. Stake Age : 90 Days, 30 Million CD honorcoin_xhc.png 2014-06 80 XHC Honorcoin 2014-06-04 2% distributed 1620 20 21000000 Algorithm: X11 | 100% PoS (After PoW), Total Coins [PoW]: 21,000,000, Block Times: 20 seconds, PoW Blocks: 12960 | 3 Days, Block Reward: 1620 coins, Blocks <101: 1 coin, Coin Maturity: 80 blocks, Annual Interest: 5%, Min Coin Age: 6 hours , Coin Age Maturity: Unlimited, Free Distribution: 2% | 420,000 coins axiocoin_axio.png 2014-06 4 50 AXIO Axiocoin 2014-06-29 60 100000000 soundbit_snd.png 1 block DGW 2014-07 SND Soundbit 2014-07-14 200000 120 -1 Super secure hashing algorithm: 15 rounds of scientific hashing functions (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo, hamsi, fugue, shabal, whirlpool), Block reward is controlled by moore's law: 2222222/(((Difficulty+2600)/9)^2), GPU/CPU only mining - no ASIC!!, Block generation: 2 minutes, Difficulty Retargets every block using Dark Gravity Wave, Est. ~7M Coins in 2015, ~13M in 2020, ~23M in 2030, Anonymous blockchain using DarkSend technology (Based on CoinJoin): Beta Testing, PREMINE 200k coins for IPO and network testing. This is less than one percent premine! shoecoin_shoe.png 2014-07 100 SHOE Shoecoin 2014-07-16 yes 500 60 closed source 2500000 Specifications, Algorithm: X15, Total coins: 2,500,000, Proof of Stake Reward: 11, Block Time: 60 seconds, Coin Maturity: 100 blocks, Min Coin Age: 3 day , Min Tx Fee: 0.00000001 SHOE, Premine: yes, Block Rewards, 500 SHOE, PoW: 5000 Blocks xenoncoin_xec.png 2014-09 XEC Xenoncoin 2014-09-11 0.5% 20 60 100000000 We are releasing XenonCoin as a Proof of Work only coin in the hopes of building a secure blockchain and giving something back to the miners who have invested big money in rigs only to see it all go to waste on PoS coins. Xenoncoin follows Myriad's implementation of Multi-PoW and we hope to work closely with not just the community but other Developers looking to make advancements with Multi-PoW coins. We do have further annoucements to make regarding ongoing development support and we will be looking to test some new features shortly after launch., Algorithim: Multi PoW. (Scrypt, SHA256D, Qubit, Skein or Myriad-Groestl), Total Coins: 100 Million, Starting Subsidy: 20 XeC Per Block, Minimum Subsidy: reward will never be lower than 1.25 XeC, Block Halving: Block rewards will halve every 2102400 blocks (around 2 years), Ticker: XeC, RPCPort: 16669, P2Port: 16668, Premine: 0.5% ('h', 2102400, 'b') gsccoin_gsc.png 2014-04 GSC GSCcoin Scrypt dime_dime.png 2013-12 DIME Dimecoin re-target every 65536 sec, 1024 coins per block, and 460 billion total coins. sapiencecoin_xai.png 2014-11 112 XAI Sapience 2014-11-19 60 1123581 The total number of coins in Sapience is 1,123,580, based on the Fibonacci sequence., Proof-of-stake interest rate is 11% per year., Coins mature after 112 blocks. florincoin_flo.png 90 blocks 2013-06 FLO FlorinCoin 2013-06-17 0 100 40 160000000 First coin with transaction comments Florin Coin - FlorinCoin Abbreviation: FLO Algorithm: SCRYPT Date Founded: 6/17/2013 Total Coins: 160 Million Confirm Per Transaction: Re-Target Time: 90 Blocks Block Time: 40 Seconds Block Reward: 100 FLO, halving every 800,000 blocks Diff Adjustment: Premine: None. ('h', 800000, 'b') luckysevenscoin_l7s.png 7 minutes 2015-08 77 L7S LuckySevens 2015-08-10 22.85% 77 77777777 Algo: Scrypt, Pre-Mine: 17 777 777 to cover investments, Total Coins: 77 777 777, Block Time: 77 secounds, Coin Maturity: 77 Blocks, Retargeting: 7 minutes fuzzballcoin_fuzz.png 2015-09 FUZZ Fuzzballs 2015-09-20 0 500 30 1000000 No premine, no ICO., Specs, Algo: Scrypt, Ticker: FUZZ, Block Target: 64 seconds, POW:, Block > 500 FUZZ, Halves every 50000 blocks ('h', 50000, 'b') sunnysideupcoin_ssu.png KGW 2015-11 SSU Sunnysideupcoin 2015-11-26 10% 50000 600 1000000000 Coin Name: sunnysideupcoin , Coin Abbreviation: SSU , Coin Type: Pure PoW , Hashing Algorithm: Scrypt , Dificulty Retargeting Algorithm: Kimoto Gravity Well, Time Between Blocks (in seconds): 600 , Block Reward Type: Simple , Block Reward: 50,000 , The block reward will cut in half every 2.28 Months , Premine: Yes , Premine Amount: 10% , Total Coins: 1,000,000,000 ('h', 2.28, 'm') netbitscoin_nbs.png 2015-04 NBS Netbits 2015-04-27 1.9% 4000 120 1500000000 SHA256, POW, TOTAL COINS: 1,500,000,000, Time per block: 2Min, Reward per block: 4,000, block halves every 200,000 blocks, premine: 1.9% ('h', 200000, 'b') rublebitcoin_rubit.png 10 2015-11 120 RUBIT RubleBit 2015-11-10 4% 40 60 100000000 NAME: RubleBit, Short Name: RUBIT, algorythm: Scrypt, coin supply: 100000000, Developers safe: 4000000, coin mining available: 100000000, block generation time: 60 seconds, half reward: 1 000 000 blocks, block max size: 8 Mb, block reward: 40 RUBIT, start difficulty: 0.00024414, change difficulty: 10 blocks, coinbase maturity: 120 blocks ('h', 1000000, 'b') capitalcoin_cptl.png 2014-05 CPTL CapitalCoin 0 100000000 Launch Date May 01 2014 11:59 PM EST Total Coins 100 000 000 CPTL Premine 0 Blocks Block Reward 1 000 CPTL Block Time 30 Seconds Transaction Time 120 Seconds Difficulty Retarget Every Block Algorithm Scrypt PoS Annual Interest 10% Minimum Stake Age 12 Hours Maximum Stage Age 2 000 Days electronicbenefittransfercoin_ebt.png 2014-01 90 EBT ElectronicBenefitTransfer 2014-01-28 1% 1000 30 250000000 Welcome to the FUTURE of Electronic Benefit Transfer! EBT Coin! Do you think you missed out on the BTC train? Well think again! Now there is assistance for the crypto impaired, EBT is here for you less fortunate crypto holders so get on board! For most of its history, the Food Stamp Program used paper denominated stamps or coupons worth US$1 (brown), $5 (blue), and $10 (green). In the late 1990s, the food-stamp program was revamped, and.” What if in the future “card” is replaced with “coin”? Could cryptographic currency be the future of electronic benefit programs?, Coin Information: Scrypt, 250 million coins, 30 second block time, PoW Reward: 1,000 halving every 100k blocks - final subsidy of 1 coin after block 1,100,000, PoS Reward: Begins on day 30, mature at day 90, 1% (2.5m) pre-mine for giveaways/bounties (see below) ('h', 100000, 'b') sexcoin_sxc.png 8 hours 2013-05 6 SXC Sexcoin 2013-05-28 0 100 60 premined 250000000 JunkCoin clone, overtaken. Sex Coin - SexCoin Abbreviation: SXC Algorithm: SCRYPT Date Founded: 5/28/2013 Total Coins: 250 Million Confirm Per Transaction: 6 Per Transaction Re-Target Time: 8 Hours Block Time: 1 Minute Block Reward: 100 coins per block, halved every 600,000 block Diff Adjustment: Premine: Premined first 2 blocks for bounties. ('h', 600000, 'b') euphoriacoin_eup.png 2 blocks 2014-03 EUP Euphoriacoin 2014-03-13 5% 48000 60 11000000000 Algorithm - euphoria(modified scrypt-n) - gives great performance with nvidia cards too., Max Coins - 11 Billion, Initial Block Reward - 48000, Block Reward Halving - Every 80,000 Blocks, Block Time - 60 seconds., Difficulty Retarget - 2 Blocks, Premine - 5% for the IPO ('h', 80000, 'b') hardforkcoin_hfc.png 1 block 2014-08 HFC Hardforkcoin 2014-08-01 0.25% 237 30 50000000 Instead of deploying complicated, CPU-friendly algorithms, we use use simple GPU and ASIC friendly algorithms. And as soon as an ASIC is on the verge of being made, we simply change the algorithm by automated hard forking, cause it's so easy to implement it for us regardless of how popular the coin is. Specs: 30 Seconds block Block reward 237 Difficulty retarget every block. Premined 0.25% (address will be posted soon). Blake Constant 237 coins per block 500 million coins. sovereign_sov.png 2014-05 SOV Sovereign 6750000000 jixiangcoin_jxc.png 2014-01 JXC Jixiangcoin 2014-01-15 mozzsharecoin_mls.png 48 hours 2014-08 MLS Mozzshare 2014-08-05 60 210000000 Algorithm: HEFTY1, Proof: POW, Total coins: 210 million, Block time: 60 seconds, Ripening time of latest block: 48h brothercoin_brc.png 2014-06 BRC BrotherCoin 2014-06-10 60 100000000 X11 Algo, Progressive Difficulty Adjustment, TOTAL AMOUNT one hundred million, BLOCK REWARDS, 60 seconds block time , TOTAL one hundred million aryacoin_arya.png 2014-06 ARYA Aryacoin 2014-06-05 100 60 7500000 PROOF OF WORK:, Block Reward: 100 , Block 1: IPO/SHARE coins + 3 shares for bounties, Block 2 till 150:0 coins per block, (Block 1 - 100 will be mined for test), Block 151: 100 coins , Block 7000: 70 coins, Block 12000: 40 coins, Coins from PoW: 2.2/2.5 million approximately, Block Reward: 100 , Block Time: 60 sec, Diff retarget: Every block, PoW Distribution: 10 days, PROOF OF STAKE:, PoS interest: 20%, PoS block: 12000, Supply: 5 million, Minimum age: 24 hours, Max age: 30 days, Coin maturity: 50 block covencoin_cov.png 30 minutes 2015-04 10 COV Coven 2015-04-18 90 33000 Algrithm SHA256, Retarget 90s, POS/POW, Min Stake age 1h, Max Stake age 30d, Maturity 10 confirmations, Specs:, blocks 1-100 : 0, Blocks 101-200: 50, blocks 200-400:45, blocks 400-600 :40, blocks 600-800 30, blocks 800-1000 20, blocks 1000-1200 :10, Supply 33k currencyoflightcoin_cura.png 2015-11 CURA CurrencyofLight 2015-11-17 100% 0 60 closed source 444444444444444 CURA are issued at a fixed rate of 671 million units per day (671,000,000 miles per hour is the speed of light). CURA uses a proof of labour algorithm where people are rewarded for positive contribution to society. CURA uses Multichain Bitcoin privilege based technology for zero hour accountability. multichain Multichain protocol. cinnicoin_cinni.png 2014-04 CINNI Cinnicoin 868 coins per block and 15 million total coins. cryptospotscoin_cs.png 10 mins 2015-02 5 CS CryptoSpots 2015-02-28 100% 673 60 service layer 60000000 Virtual PoW, ~673 CS per block, ~1000000 CS monthly distributed , ~12000000 CS distributed in 1 year(total supply for PoW), POS is 7%, POS Maturity is 1 hour bernankoin_bek.png 2013-12 BEK Bernankoin 2013-12-17 satire -1 itcoin_xit.png 1 block 2015-05 120 XIT ITcoin 2015-05-15 16.5% 60 13000000 POW & POS, Algorithm: X11, Ticker: XIT, Total coin supply: 13 000 000, Block time: 60 seconds, Coinbase maturity: 120 blocks, Difficulty retargeting after every block , POW, Blocks 1-200 - reward 0 coins to guarantee a fair launch for all miners, Blocks 200-43400 - reward 100 coins - approximately 30 days of mining, Blocks 43400-86600 - reward 50 coins - approximately 30 days of mining, Total POW coins: 6 480 000 , POW phase will end at block 86600 and POS will begin., POS, Blocks 86600-129800 - reward 100 coins - approximately 30 days, Blocks 129800-655400 - reward 0.1 coins - approximately 1 year, Total POW and POS coins combined: 10 852 560, Premine: 2 147 440 = ~16,5 % of the coinbase., 0,5% of the coinbase will used for promotions and rewards., 1% of the coinbase will be distributed for free to the community. 0.2% of the coins will be given out every, month through our website. You will need to register to our forums to participate. Everyone will be identified, with the registration, e-mail address, ip-address and finally in our chat in the wallet. This way we will ensure a fair distribution., The registration for the giveaway will start on the first day of the month (june) and coins will be distributed on the last day of the month., This process will be repeated during the next 5 months., 15% of the coinbase will be sold publicly to investors starting now and ending next thursday 18.00 GMT. sapientacoin_sap.png 1 block 2014-06 SAP Sapienta 2014-06-23 2% 30 180 premature 12000000 X11 - POW, 180 secs x block, 30 coin x block, Retarget every block with KGW, 12,000,000 Total Supply, Premine 2% rootcoin_root.png 2014-07 ROOT Rootcoin 2014-07-28 99 300 3000000 algorithm: scrypt (community voting), blockreward: 99 ROOT, total coins: 3 million coins, blocktime: 5 minutes, PoS reward: 3%, rpcport: 27377, p2pport:28388, qrcode support, POW-CUTOFF: block 3700, , +100 blocks were instamined at the offical pool., about 380 blocks were instamined through solominers AFTER launch, and before the official pool started hashing. thats a total of 38k ROOT. , The codebase of ROOT was fork from another scrypt-coin, and the GetNextTargetRequired(), the function which settles the difficully,, reset the first 450 blocks with the starting diff. leading the first 450 blocks, to be mined superfast. - as this was topic for some FUD/troll posts, i just, added it to the OP to make it public. pleiadescoin_leia.png 2014-08 LEIA Pleiadescoin 2014-08-12 0 60 20000000 Tickername: LEIA, PoW Algorithm: SHA2-384, Mining TX Reward: 20 LEIA, The first 10000 mining tx as a fee of 2 LEIA, miner gets only 18 LEIA, Fees: 0.1% transaction fee, min 0.00001 LEIA, Total coins: 20,000,000 LEIA, Private message system, BuildIn CPU miner smartchipscoin_sc.png 2015-09 4 SC Smartchips 2015-09-14 1000000 60 rebrand of IsotopeCoin 1826000 Currency name: Smartchips, Currency symbol: SC, Algo: sha256d, Total coins in PoW: 130000 SC, Total coins in DPOS: 280000 SC circa, Coins for Initial Boost Crowdfunding: 1 Million, Total coins after IBC + PoW + PoS: 1826000 SC (circa), Confirmations for block maturity: 100, Block time: 60 seconds, Confirmations for TX: 4 , RPC port: 25391 , P2P port: 29398 , Minimum stake age: 4 hours, From old XTP thread. octocoin_888.png 888 seconds 2014-04 888 OctoCoin 88888888 Scrypt POW. 88,888,888 Total Coins. 88 second block targets, Re-targets every 888 seconds. Re-target calculated using a 40 block sample and .25 damping imperialcoin_imp.png 2014-03 IMP ImperialCoin 1020000000 SHA256 algo 1,024,000,000 coins to be issued Halves every 120400 blocks INITIAL 1024 block reward RPC Port 10241 Port 10240 Block Explorer is UP - Check Forum! wearesatoshicoin_was.png 30 seconds / 2 blocks 2014-04 WAS Wearesatoshi 100000 social comment 40000000000 ('r') euphoriacoin_euph.png 2014-11 100 EUPH Euphoriacoin 2014-11-20 2% 60 10000000 Coin Name: EUPHORIA, Coin Ticker: EUPH, Total Coins: 10 Mil, Algo: X13 POW / POS, Premine: 2%, Block Time: 1 min, POS 4%, Block Rewards, 0-100 -- Premine and 1 reward. , 101-3001 -- 500 EUPH, 3001 -- 6000 -- 1000 EUPH, 6001 -- 9000 -- 250 EUPH, 9001 -- 12000 -- 125 EUPH, 12001 -- 1500 -- 500 EUPH, Coin Maturity -- 100 Blocks ohmygodcoin_omg.png 2014-12 90 OMG OhMyGod 2014-12-03 1.4% 60 2150000 TICKER: OMG, ALGO: X11 (enjoy low gpu temperatures), PoW+PoS, PoS interest: 55% (YES, 55% annual interest!), PoS will start at block 9560, 50 blocks before end of PoW to ensure smooth transaction, Coin Maturity: 90 blocks with 2 hours Min Age, MAX SUPPLY: 2.150.000 , Pre-mine: 30k OMG ( 1,4 % Of total supply), 10k of pre-mine will be distribuited as bounties ( see bounties section), Wanna instamine? Not really, first 60 blocks got ZERO rewards!, First 26 block are already mined,first 10 for pre-mine plus another 26 with ZERO reward to check networ stability., Block structure will change every 1k blocks after block 60: 400-300-200-150-120-100-80-100-150-250, PoW will end in about 2 weeks. redoakcoin_roc.png 1.5 days 2014-09 ROC RedOakCoin 2014-09-14 129.53 150 72000000 RedOakCoin, Standard Scrypt Algorithm, 2.5 Min., Transaction Time, 1.5 Day(s) Re-Target Time, 72 Million Max Subsidy, 129.53 *ROC Reward (Initial Inflationary Term) nerdycoin_nerdc.png SafeGW 2015-05 6 40 NERDC Nerdycoin 2015-05-14 0 0.6 180 clone of URO 10000000 NERDC] NERDYCOIN, Currency Specifications, Algorithm: X11 PoW, Target Block Time: 3 minutes, Block Reward: 0.6 NERDC, Difficulty Retargeting Algorithm: Safe Gravity Well, Genesis Block Premine: 0 Uro, Standard Mined Block Maturity: 40 Confirmations, Recommended Transaction Confirmations: 6, Default RPC Port: 36348, Release date from current UROCOIN Block 175000 neocoin_nec.png 2013-07 NEC Neocoin 新币 2013-07-26 0 21 100 80000000 NVC-like, with messages. Multiple failed relaunches until finaly ok. Neo Coin - NeoCoin Abbreviation: NEC Algorithm: SCRYPT Date Founded: 7/26/2013 Total Coins: 80 Million Confirm Per Transaction: Re-Target Time: Block Time: 1.67 Minutes Block Reward: 21 Coins Diff Adjustment: Premine:. 1% NVCS, SA 30/90, Coins 3.8 secondscoin_sec.png 2013-12 SEC Secondscoin re-target every 1 hour, 60 coins per block, and infinity total coins. Mixed hashing clone. mlkcoin_mlk.png 2014-05 MLK MLKCoin 3.5 coins per block and 2 million total coins. baobeicoin_bbc.png 2013-12 BBC Baobeicoin 2013-12-20 0 96000000 12 coins per block and 96 million total coins. Baby Coin - BabyCoin Abbreviation: BBC Algorithm: SCRYPT Date Founded: 12/20/2013 Total Coins: 96 Million Confirm Per Transaction: Re-Target Time: Block Time: Block Reward: Diff Adjustment: Premine:. nuggets_nug.png 2013-07 NUG Nuggets 2013-07-15 yes re-target every 21 blocks and 49 coins per block. Premined LTC clone with random superblocks and constant base reward. pentablake Folklore combiner: five rounds of BLAKE 512. cloudtokenscoin_cloud.png 2014-11 CLOUD Cloudtokens 2014-11-19 100% 60 2000000 File hosting and cloud platform. 100% PoS via Escrowed ICO, Total Coins: 2 Millions CloudTokens, Min Stake Age: 8 hours, Max Stake age: 12 hours, Variable PoS Stake interest:, First Year 15%, After first year 5.5% basecoin_bac.png 2013-09 BAC Basecoin re-target every block, POW / POS, 1 coins per block, and 6 million total coins. Specifications:, POS+POW, Each normal block has 1 Basecoin, 1 minute block time, Daily generation is 1440 coins, basecoin is really a rare coin., Difficulty retargets every block, mining payout will be stable all over the lifecycle of Basecoin, Expected total mined coins will be 6000000 Basecoins, 4 confirmations for transaction, 50 confirmations for minted blocks, Support transaction message, Diff start at 0.01, The default ports are 21206(Connect) and 21207(RPC). perfectcoin_perc.png 3 days 2014-02 PERC Perfectcoin 40 100 100000000 memorycoin_mmc.png 2013-12 120 MMC MemoryCoin 记忆币 2013-12-25 variable 120 10000000 Algorithm: Momentum + SHA512 Block Time: 6 minutes Port: 1968 Codebase: ProtoShares 0.8.6 (Bitcoin 0.8.5) Block reward: 280 MMC, 5% reduction every 1680 blocks Total Coins: 10 Million coins in the first 2 years, 2% inflation thereafter Difficulty Retargeting: Every block with the Kimoto “Gravity Well” xencoin_xnc.png 9 hours 2013-06 6 XNC XenCoin 2013-06-19 0 (200, (10000, 10000, 2000)) 20 2100000000 Premined LTC clone Xen Coin - XenCoin Abbreviation: XNC Algorithm: SCRYPT Date Founded: 6/19/2013 Total Coins: 2.1 Billion Confirm Per Transaction: 6 Blocks Re-Target Time: 9 Hours Block Time: 20 Seconds Block Reward: First 10,000 blocks reward is 2000. After 10,001 block - 200 coins per block, halves every 3 years (4,665,600 blocks) Diff Adjustment: 9 Hours Premine:. ('h', 3, 'y', 4665600, 'b') talkcoin_tac.png 2014-05 TAC Talkcoin 2014-05-17 experimental Uses coins as address mechanism doubloon_dbl.png 2013-05 DBL Doubloon 2013-05-15 8000000 6.77 coins per block and 8 million total coins. LTC clone. 0 starting diff. NEW! - Chest Version 0.9.0 Out now! SOURCE : Windows Binaries: WEBSITE: BLOCK EXPLORER: BLOCK CRAWLER: CHAT: GAMBLING: Blackjack! FAUCET: POOLS:. lgbtqoincoin_lgbtq.png KGW 2015-08 LGBTQ LGBTQoin 2015-08-18 25000 50 300 50000000 LGBTQoin [LGBTQ], SPECIFICATIONS, Algorithm: X11, Coin Type: Pure PoW, Total Coins: 50,000,000, Block Time: 5 mins, Block Reward: 50, Difficulty Retargeting Algorithm: KGW, Block Halving Rate: 500,000 ('h', 500000, 'b') genstakecoin_gen.png 2015-06 10 20 GEN Genstake 2015-06-03 1000 1 60 15000000 Block time: 60 seconds, Min transaction fee: 0.0001, Confirmations: 10, maturity: 20, Max Coins: 15,000,000, Premine: 1000 GEN , P2P port: 9341, RPC port: 9342, addnodes (hardcoded):, node1.genstake.com, node2.genstake.com, Proof-Of-Work, Algo: Scrypt, Block Reward: 1 GEN, Halves Annually, Max Height: None, Proof-Of-Stake, Block Reward: 20 GEN, Halves Annually, Min age: 1 hour, Max age: 30 days ('h', 1, 'y') bitchips_chp.png 2011-11 CHP BitChips 2011-11-23 groupcoin_grp.png 2014-03 GRP GroupCoin 2014-03-15 experimental Experimental precursor to DevCoin Groupcoin is the precursor to devcoin, in which we tested some ideas to fall back on in case we could not do what we wanted devcoin to do. daffybuckscoin_dfy.png 2015-04 DFY DaffyBucks 2015-04-27 1951937 1000 600 4151937 DaffyBucks is scrypt with a coin total of 4,171,937, 1,900,000 of which will be sold through an ICO to fund the project, and 2.2 million which will be rewarded to miners through a long Proof of Work (PoW) mining period. The ICO funds will be used to fund development on the project, and then also allow non-miners a way to join the community. To further secure the network and reward the community for their support, all mined and purchased coins will can be staked at a 20% annual rate. , Name: DaffyBucks, Ticker: DFY, Coin Type: PoW/PoS Hybrid, Algorithm: Scrypt, Block time: 600 seconds, Block Reward: 1000 DFY, Halving Rate: 1100 blocks, Premine Amount: 1951937, Total Coins: 4151937, ICO Total: 1900000, Reserved for giveaways and bounties:51937, Yearly Interest %: 20.00, Minimum Stake Age (in days): 15, Maximum Stake Age (in days): 90 (h,1100,b) noahcoin_noah.png 2014-07 10 80 NOAH NOAHcoin 2014-07-10 1% varying 120 40000000 Algo: X13, Total coin: 40 Million , Block time: 120 seconds, Confirmations on Transactions: 10, Maturity: 80, 1% Premine, 3% IPO, POW REWARD, Block 1: Premine + IPO, Block 2-100: 1 NOAH, Block 101 - 10000: 100 NOAH, Block 10001 - 20000: 150 NOAH, Block 20001 - 30000: 200 NOAH, Block 30001 - 40000: 150 NOAH, Block 40001 - 50000: 100 NOAH, Block 50001 -100000: 200 NOAH, POS : 10% after block 100 000 POS yacoin_yac.png 2013-05 YAC Yacoin 雅币 2013-05-05 0 25 60 2000000000 Multi-hash based cryptocoin with dynamic re-targeting, 25 coins per block, and 2 billion total coins. Your Alternative Currency. NovaCoin fork, with modified scrypt hashing Ya Coin - YaCoin Abbreviation: YAC Algorithm: SCRYPT Date Founded:5/6/2013 Total Coins: 2 Billion Confirm Per Transaction: Re-Target Time: Block Time: 1 Minute Block Reward: 25 Coins Diff Adjustment: Premine: None. Novacoin-based. 5% yearly (0.4%), SA 30/90 joincoin_j.png 1 block Digishield 2014-08 J Joincoin 2014-08-13 1400000 45 2800000 Joincoin, Exchange symbol: (J), Total coins: 2,800,000, Coin distribution: 1,400,000 through mining/ 1,400,000 through Pre-sale, Proof of Work, Block Time: 45 seconds , Difficulty Adjustment using: Digishield , Anonymous Blockchain: Provided by ToR (already implemented) goldbar_gdb.png 2014-04 GDB GoldBars 400000.0 racecoin_race.png 2 minutes 2014-08 10 30 RACE Racecoin 2014-08-11 1% 30 9679800 Racecoin is a PoW/PoS-based cryptocurrency, Algorithm: X11, PoW/PoS, Symbol: RACE, Total from POW Mining: 9 679 800, From To Reward, 1 100 200, 101 300 500, 301 1,000 800, 1,001 2,000 2,000, 2,001 3,000 3,500, 3,001 4,000 2,000, 4,001 5,000 1,500, Time per Block: 30 s, Proof-of-Stake interest will begin being generated after block 500., Premine: 1%, Stake interest: 5% per year, Proof-of-Stake Minimum Age: 2 hours, Proof-of-Stake Maximum Age: No Limit, Difficulty readjusts every 2 min, Confirmations on Transactions: 10, CoinBase Maturity: 30 sambacoin_smb.png 2014-03 SMB Sambacoin 892000000.0 re-target every 4 hours, 400 coins per block, and 891 million total coins. skeincoin_skc.png 2013-11 SKC Skeincoin 2013-11-01 0 32 120 17000000 Skein-SHA2 based cryptocoin with POW, 32 coins per block, and 17 million total coins. Skein Coin - Skeincoin Abbreviation: SKN Algorithm: SHA-256 Date Founded: 11/1/2013 Total Coins: 17 Million Confirm Per Transaction: Re-Target Time: Block Time: 2 Minutes Block Reward: 32 SKC, halving every year Diff Adjustment: Premine: None. ('h') newbiecoin_newb.png 2014-06 NEWB Newbiecoin 2014-06-01 -1 Name: NewbieCoin, Symbol: NEWB, Launched time : 2014-06-01 06:45:46 (UTC), Newbiecoin (NEWB) is a protocol, coin, and client used to run decentralized applications such as creating and backing decentralized crowdfunding projects , betting on dice rolls and other games in a decentralized casino, etc. The protocol is built on top of the Bitcoin blockchain. Coins were created by burning Bitcoins during a creative “Twin Proof of Burn” period., Newbiecoin is modified from ChanceCoin's open source with new features such as TWIN-POB,POS,new bet rule and new decentralized applications such as decentralized crowdfunding . pyongyangcoin_xpg.png 2015-01 XPG PyongyangCoin 2015-01-11 0 600 500 This is clearly the best coin of 2015! Let's keep it running for the lulz. Max Coins: 500, Specs: Same as BTC hamstercoin_ham.png 2013-12 HAM Hamstercoin 1000 60 diodecoin_dio.png 2014-11 DIO Diode 2014-11-20 1% 1500 60 4500000 Diode has a simple, effective, easy and fast distribution that spans over 3000 Blocks. Hashing will be done with the X13 Algorithm, and PoS will start at block 2800., Algorithm: X13, Block Reward: 1500 DIO, Block Time: 60 Seconds, Max Diode: 4,500,000 DIO, PoW Phase: 3000 Blocks, PoS Starts: Block 2800, PoS Interest: 5%, Pre-mine: 1% (45k DIO) alibacoin_abc.png 2014-02 5 30 ABC Alibacoin 2014-02-07 30 20000000 Scrypt Algo, Total Coins: 20000000, Block Times: 30 Seconds, Confirmations Mined Blocks: 30, Transaction Conformations: 5, BLOCK REWARDS, 0-80640 100 Coins, 80641-161280 50, 161281-241920 25 Coins, 241921-322560 10 Coins, 322561-403200 5 Coins, 403201-483840 2 Coins, 483841-564480 1 Coins, 564481-645120 0.5 Coins, 645121- 725760 0.2 Coins, 725761+ 0.1 Coin, reducing chicoin_chi.png 2013-12 CHI Chicoin a re-target every 5,472, 50 coins per block, and 77 million total coins. x11coin_xc.png 2014-05 XC X11coin 2014-05-08 90 33300000 POS interest: 3.33% per year, PoSmin 8 hours, PoSmax 30 Days. Abandoned and re-announced as Xcurrency. analcoin_anal.png 1 block 2015-03 ANAL AnaLCoin 2015-03-13 6 40 600000 AnaLCoin - The Coin for Analysts !, Algo: SHA256, Abbreviation: ANAL , Max number of coins ( Including POS phase) : 600,000, Timing of block (in seconds) : 40, Difficulty Retarget every block, Coins per Block (during pow phase) :6, Total POW: 300,000 coins, Block number when POW ends : 50000, POS interest per year : 2,25%, Min stake age : 24 hours, Max stake age : 30 days, In-Wallet Trading ! bitleu_btl.png 2014-04 BTL Bitleu 30 blocks premined 5000000000 Bitleu is intended to be ASIC and multipool resistant using a unique combination of hashing and mixing functions, Scrypt Jane. Low block rewards that slowly increase over the first year and an increasing of the N-factory will provide a more attractive mining environment for a wider variety of participants and discourage the larger farm operations to over mine and flood the exchanges with Bitleu. There will be 5,000,000,000 Bitleu created (5 billion). with 50% premined. The 50% of premined coins will be distributed as follows: 0.1% of total coins to the coin founder to offset expenses incurred and to pay for continued expenses including paymens for advertising, coding/application developement, consultations with lawyers and other related expenses. 4.9% of total coins to the Bitleu Foundation to cover operating expenses and used for investments and promoting the Bitleu ideals. A full transparency policy will ensure fair play and promote community participation. 45% of total coins will be distributed to Romanians as free grants to encourage their participation in the Bitleu community. Any of the remaining Bitleu will be surrendered to the Bitleu Foundation and used for operating expenses, investments and developing the Bitleu network. 5,000,000,000 Billion total coins 2.5 Billion coins Premined 2.5 Billion coins for mining in 16 years Block Time: 60 Seconds Confirmations Mined Blocks: 30 Transaction Confirmations: 5 Port: 22641 R bitcoinfastcoin_bcf.png 2014-12 60 BCF BitCoinFast 2014-12-04 1000000 10 30 premined 33000000 Algorithm: Scrypt [POW/POS], Total initial supply: 33 Million, POS=25% Annually, Block time: 30 seconds, 10 Coins per block | **(100 Coins per Block for first 24 HRS for early adopter)**, Block Maturity: 60 blocks, Pre-mine= 1 Million with 300k going to Cryptex Exchange for having it listed at launch., POS INFORMATION, Network-Stake- 25% year, Min Coin Age: 1 day (24h) instaminenuggetscoin_mine.png 33 hours 2015-02 3 MINE InstaMineNuggets 2015-02-25 3% 30 180 21649485 Coin Type: Litecoin Scrypt, Halving: 350000 Blocks, Initial Coins Per Block: 30 Coins, Target Spacing: 3 Minutes, Target Timespan: 33 Hours, Coinbase Maturity: 3 Blocks, Pre-Mine: 3% = 649485 Coins, Max Coinbase: 21000000 + 649485 = 21649485 Coins ('h', 350000, 'b') infinium8_inf8.png 2014-07 INF8 Infinium-8coin 2014-07-18 0 90 -1 LAUNCHED 7/18/2014 12:00 UTC, CryptoNote-based digital currency,. , Features, Untraceable payments, Unlinkable transactions, Blockchain analysis resistance, Adaptive parameters, Details in the CryptoNote white paper, Specifications, PoW algorithm: CryptoNight [1], Max supply: Infinite, Block reward: increasing with difficulty growth [2], Block time: 90 seconds, Difficulty: Retargets at every block, [1] CPU + GPU mining (about 1:1 performance for now). Memory-bound by design using AES encryption and several SHA-3 candidates. , [2]). Both practices are counterproductive. , Infinium-8 s the cryptocoin with blockreward increasing together with difficulty. , Block reward formula is a fixed point implementation of log2(difficulty). This means that block reward increases much slower than difficulty: for 1 coin to be added to block reward a difficulty must be increased twice. This rate looks to be reasonable, Block_reward = log2(difficulty) * 2^40(difficulty), Downloads , Win64: , Linux: , MacOS: , Source: , How to start mining, 1. Download or compile binaries, 2. Start infiniumd, 3. Wait until network is synchronized. You will be notified with “SYNCHRONIZED OK” message, 4. Run simplewallet, 5. Create a new wallet or open the existing one, 6. Type start_mining %number_of_threads%. , Example: start_mining 3, 6.* You can start mining from the daemon with start_mining %wallet_address% %number_of_threads% command. , Example: start_mining inf8U2kpouwiAr2cMLHsTqVijc 6, Pools , Xmining Mining Pool , - Only 1% commission;, - Most stable and powerful servers;, - Fast and responsive support;, - Based on the updated node-cryptonote-pool, Extreme Pool, - Fast Payouts directly to your private wallet., - Address Validation, - Variable Difficulty based on your individual CPU performance, - Verified Secure, - Servers hosted in our own Data Center, not hosted by an online 3rd party provider, Hashhot Pool ethereum_eth.png 2014-01 ETH Ethereum 2014-01-23 bitcoin125coin_btc12.png 1 block 2015-10 12 200 BTC12 Bitcoin12.5 2015-10-12 1250000 12.5 125 12500000 POW/POS Bitcoin12.5 (BTC12), Algo: QUARK, Reward: 12.5 Coins, Confirmations: 12 , maturity: 200, Diff algo: Digishield Retarget: Every block, Max amount: 12.500.000 BTC12.5, Premine for ICO 1250000 , Price Buy 0.000019 BTC 23,75 BTC, Unsold coin will be destroyed, Block time: 125 seconds, Last block PoW 15000, transaction fee: 0.0001, PoS Min. 7 Hour , Pos Max. Unlimited, Min stake confirmations: 450 nyancoin_nyan.png 2014-01 NYAN NyanCoin 337000000 thcoin_th.png 2014-06 10 30 TH THcoin 2014-06-06 0 30 10000000 Algorithm: X11 POW+POS, Coin name:bitcoin Th/s(THcoin), Short name: TH, Total coin: 10,000,000, POW REWARD, 500 - 4000, 2000 - 2000, 5000 - 1000, 10000 - 500, POW LAST BLOCK: 10000, Block time: 30 seconds, POS Min age: 10 minute, POS Max age: unlimited, Confirmations: 10, Maturity: 30, Stake interest 10%, PORT 12233, RPC 22233, NO IPO, NO PREMINE friendshipcoin_fsc.png 2014-03 FSC Friendshipcoin 37400000000.0 jennycoin_jny.png 1 block 2014-04 JNY JennyCoin yes 867.5309 60 868000000 Total Coins - 867530900 1 Minute Blocks Difficulty will change every block Algo - Scrypt Block Reward 867.5309 Blocks 1-10 1% Premine total of 8,675,309JNY Blocks 11-60 0 coins mined before launch to mature premine Blocks 61-100 10 coin block reward Blocks 101-500 50 coin block reward Blocks 501-1000 100 coin block reward Block 1001 867.5309 coin block reward bitgem_btg.png 2013-05 BTG Bitgem 2013-05-16 1500000 re-target each block, POW / POS, and less then 1 coins per block. NVC scarce clone. 0 starting diff Novacoin-based. 3% NVCS (0.2%), SA 30/90, Coins 0.023 quantum2_qtm2.png 2014-07 10 QTM2 Quantum2 2014-07-09 0 60 10000000 QTM2, Specifications, Pure Proof-of-Stake (PoS), Max coins: 10 million QTM, Transaction fee: 0 QTM, Min stake age: 30 minutes, no maximum age, Stake interest: 0%, Confirmations: 10 androidtokens_adt.png 2013-08 ADT Android token 66000000000 re-target every block, POS, 524,288 coins per block, and 180 billion total coins. Just a clone. Android Token is based off of Infinitecoin using scrypt as a proof of work scheme. Android Token also known as ADT is one of the fastest if not the fastest confirm rate coin out there. Features Proof of Stake 0.1%, 30 days/60 Max Weight Tx Fee = 0.1% 180,000,000,000 Max Coins Ever 524,000 Coins Per Block, Halving every 100,000 Blocks 30 Second Blocks Retarget every block, with 70 Confirms per block 6 Confirms per TX. worldtopcoin_wtc.png 2014-02 WTC WorldTopCoin 2014-02-05 60 CN-focused 30000000 jerkycoin_jky.png 2014-01 JKY Jerkycoin re-target every 300 blocks, 144 coins per block, and 76 million total coins. thinkandgrowrichcoin_tagr.png 1 block 2015-06 TAGR Thinkandgrowrich 2015-06-18 20000000 60 53000000 Algorithm: x15, Block time: 60 seconds, Difficulty retargets each block, Block Reward: , Block 1-2800 = 10,000, 2800-5600 = 5,000, 5600-8400 = 2,500, 8400-10,200 = 1,250, PoS interest: 7%, minimum age 8 hours, max unlimited, One coin is divisible down to 8 decimal places, First 20 million coins are for ICO, first 2000 blocks, Total POW coins: 32,000,000 coins, 32.0 million, Total coins ICO and PoW: 53.0 million chakracoin_ckc.png 1 block 2014-05 3 50 CKC Chakracoin 2014-05-06 60 2500000000 Pure PoS Coin. PoS Minimum Coin Age 1 day. Variable interest, stake (annualized rate) years: 1 - 30%, 2 - 20%, 3 - 10%, 4 - 5%, 5 - 2%, 6+- 1% annual stake amerocoin_amx.png 2014-02 AMX Amerocoin re-target every 2016 blocks, 25,000 coins per block, and 21 million total coins. leaguecoin_lol.png 2014-05 LOL LeagueCoin lioncoin_lion.png DGS 2014-06 3 200 LION LionCoin 2014-06-15 15 60 10000000 Algorithm: X11, Total Money Supply: 10,000,000 Million LION, Block Time: 60 seconds, Block Reward: 15 LION ( 2-200 1 LION reward), Difficulty Retarget: DigiShield, Coin Maturity: 200 blocks, Confirms: 3, NO PRIMINE & NO IPO softwarecoin_soft.png 2014-04 SOFT Softwarecoin truthcoin_csh.png 2015-08 CSH Truthcoin 2015-08-10 600 sidechained 21000000 simicoin_simi.png 2014-03 SIMI Simicoin 2014-03-19 50 300 21000000 cnote_cnote.png 2014-01 CNOTE C-Note 2014-01-08 re-target every 100 blocks, 100 coins per block, and no maximum total coins. chichicoin_uuc.png 2014-02 UUC ChiChi Coin 2014-02-12 83000000 flexiblecoin_flex.png 2014-08 FLEX Flexiblecoin 2014-08-25 1% 60 1000000 blackdragoncoin_bdc.png 2014-05 BDC Blackdragoncoin 2014-05-05 geistgeld_gg.png 2011-09 GG GeistGeld 2011-09-09 Experiment on BTC algorithm, how fast can it go? just for science. glaricoin_glc.png 2014-02 GLC Glaricoin 2014-02-14 100 60 11000000000 SHA256D, 11000000000 GLC Total, 100 GLC/Block, Halving after 100000 Blocks ('h', 100000, 'b') paimaibicoin_pmb.png 2014-04 3 30 PMB PaiMaiBi 2014-04-16 80000000 32 30 580000000 Auction currency, the English name PaiMaiBi, referred to PMB, Xi'an shares another company to build a network heavyweight virtual currency, coins and its users, C shares trading network go hand in hand. PMB total of 580 million, the official pre-dug 80 million for the subscription, the total output of the first year of mining in which the first month 11,040,000 2,760,000 1,380,000 rest of the second month after the 690,000 per month until 59 years later, exhausted . PMB using the now popular POS + POW mechanism, PMB generated every 30 seconds a confirmation block, mining 30 confirmed mature, trading three confirmed mature. 101-1000 blocks of ore pool test blocks, each containing two coins. 1001 After each containing 32 PMB, every 30 days production decreased 50%, each production was lower than 8 PMB, automatic correction for 8PMB, until all exhausted after 59 years ('r', 0.5, '30d') cleverhashcoin_chash.png 2014-10 30 CHASH CleverHash 2014-10-17 100% 0 60 123995 All (HASH) tokens have already been produced in the genesis block and are secured using multiple rounds of 13 different hashes, making (HASH) one of the securest and most sophisticated crypto-currency equivalents available today. Cleverhash “CHASH” the X13 PoW/PoS Cryptocurrency., Algorithm: X13 POW/POS, Ticker: CHASH, Current Version “3.0.0.0”, Max Coins: 123,995 CHASH, RPC Port: 28195, 30 Blocks to mature, Block Reward Schedule:, Block 1 is 123,995 CHASH, Block 2-500,000 are 0 CHASH trickycoin_trick.png 1 block 2015-03 10 30 TRICK TrickyCoin 2015-03-27 60 50000000 Name: TrickyCoin, Ticker: TRICK, Pow / Pos, x11, BlockTime: 60 Seconds, Retarget: Every Block, ICO: None, Pre-mine: 0%, Confirmations on Transactions: 10, CoinBase Maturity: 30 ledgercoin_ldr.png 2015-10 7 LDR Ledgercoin 2015-10-18 1000 0.0001 60 1000000 Type: POW/POS, Algo: Sha256d, Pow Rewards:, Block 1 = 1000, Block 2 Onwards = 0.0001 Per Block, PosMin Stake = 1.5 hours, Max Stake = 7 Days, Pos Rewards:Block 1 - 10,000 = 1500%, Block 10k - 20k = 150%, block 20k onwards = 15%, every 50k blocks there will be a 50coin reward until block 1mil , Annual POS = 15%, Confirmations: 7 givebackcoin_giv.png 2014-05 GIV Givebackcoin 2014-05-15 Scrypt/SHA256D/Qubit/Skein/Groestl Algos 611coin_sil.png 48 blocks 2015-11 SIL 611coin 2015-11-03 0.611 60 611000 Forked from Namecoin the Bitcoin protocol is augmented with 611 operations, to reserve, register and update names. In addition to DNS like entries, arbitrary name/value pairs are allowed and multiple namespaces will be available. This will include a personal handle namespace mapping handles to public keys and personal address data., algo sha256, 0.611 coins per block; but with constant reduction: on average half the coin value every 2^18 blocks., re-target every 48 blocks, merged mined with Bitcoin, proof-of-work ('h', 262144) girlcoin_girl.png 2014-04 GIRL GirlCoin <5% 10000000000 Block Time: 30 Seconds Difficulty Retarget Time: 1 hour Premine: under 5% for Miley Cyrus Rewards Block 1 â 200,000 : 1-50,000 GIRL Reward Block 200,001 â 300,000 : 1-40,000 GIRL Reward Block 300,001 â 400,000 : 1-20,000 GIRL Reward Block 400,001 â 500,000 : 1-10,000 GIRL Reward Block 500,001 â 500,000 : 1-5,000 GIRL Reward Block 600,001 â 700,000 : 1-2,500 GIRL Reward Block 700,001 â 800,000 : 1-1,250 GIRL Reward Block 800,001+ : 500 GIRL Reward eccoin_ecc.png 2014-04 ECC ECCoin 1000000000.0 praycoin_josh.png 1 block 2015-02 16 150 JOSH Praycoin 2015-02-13 0 400 30 satire 64000000 Algo: SHA256, Ticker: JOSH, Diff Retarget: Every Block, POW/POS 10k Blocks POW, then POS, Max number of Coins : 64,000,000, 30 Second Blocks, Coins per Block :400, Block number when PoW ends : 160,000, PoS Reward per year : 15%, Max stake age : No max, Min stake age : 24 hrs, 150 confirmations required for mined blocks, 16 confirmations required for transactions, No premine eurocoin_euc.png 400 blocks 2015-07 20 EUC Eurocoin 2015-07-01 0 50 120 20000000 Mining will start July 1st 2015 at Noon 12:00 UTC, No premine, Based on Bitcoin 0.9.3 SHA256 source, Block target: 2 minutes, Difficulty retargets every 400 blocks, Block reward: 50 EUC, halving every 200,000 blocks, Total coin supply: 20,000,000 EUC, Coinbase maturity: 20 blocks ('h', 200000, 'b') scrasic_scr.png 2014-03 SCR Scrasic 2014-03-20 p7coin_p7c.png KGW 2015-04 P7C P7Coin 2015-04-11 0 270 30 270000000 P7COIN P7C Algorithm: Scrypt, POW, Kimoto Gravity Well, Reward: 270 P7C, Block Time: 30 sec, Havling every : 500000 blocks, Total coins: 270000000 P7C, No pre/ipo/ico! ('h', 500000, 'b') caixcoin_caix.png 2014-05 CAIX CAIx 2014-05-09 15 relaunch 50400000 Startup Money Supply: ~ 1.6 Million CAIx, Stake Interest: 5% Annually, Confirmations: 10, Maturity: 500, Minimum Stake Age: 8 hours, Ports: RPC=1510 P2P=1511 diamondbackcoin_dbk.png 2014-08 DBK DiamondBack 2014-08-16 384 1040000 Symbol: DBK, Algorithm: CryptoNight, Block time: 384 seconds. Difficulty retargets each block. Total coins 1. 04 Million cryptofocuscoin_fcn.png 2015-05 3 10 FCN CryptoFocus 2015-05-07 1000000 60 220000 Algorithm: X11, Ticker: FCS, Block time: 60 sec, PoS Interest: 10%, min TX fee: 0.001, Tx confirmations:3, Confirmations:10, Total supply: XXX, Ico total: 1M chingelingcoin_chl.png DGW 1 block 2015-04 20 CHL Chingeling 2015-04-28 100% 60 1000000 a Merchant-based crypto-currency with a $1 Buy-Back-Guarantee & Internal Market-place in wallet. retrocoin83_r83.png 2015-08 8 R83 Retrocoin83 2015-08-31 25000000 83 60 swindle, incomplete source 83000000 RETROCOIN83 [SCRYPT] R83 viacoin_via.png 1 block 2014-07 VIA Viacoin 维尔币 2014-07-18 0 5 24 92000000 Welcome to Viacoin!,., We do not consider it safe or viable long-term, to build out potentially billion dollar ecosystems that rely on an upstream service we don’t have control over. The current attempts at embedding services in the Bitcoin protocol are constantly having to design workarounds as Bitcoin plays “whack-a-mole” with them., Viacoin is designed from the ground up to be both a digital currency and provide the backbone of ClearingHouse protocol, there is a 100% guarantee that ClearingHouse will always remain compatible with Viacoin., Furthermore Bitcoin is just too slow for these kinds of applications with 10 minute confirmation times which are often much longer (up to 1 hour). For decentralized exchange or shopping, that latency is simply not viable., Viacoin is blazingly fast, utilizing block targets of 24 seconds, it is 25 times faster than Bitcoin., UPDATE: You can learn more about the story of Viacoin and ClearingHouse here, Specifications:, Name: Viacoin, Symbol: VIA, Total supply: ~92,000,000 (92MM), Algorithm: scrypt (POW), Block reward: 5, Block time: 24 seconds, Difficulty retarget: every block, NO premine. NO instamine. Presale available (read about the presale below)., Viacoin presale is now closed! fastcoin_fst.png 2013-05 FST Fastcoin 2013-05-29 32 12 premined 165888000 re-target every hour, 32 coins per block, and 165.8 million total coins. LTC clone. 12 sec blocks. 0 starting diff. Fast Coin - FastCoin Abbreviation: FST Algorithm: SCRYPT Date Founded: 5/29/2013 Total Coins: 165.888 Million Confirm Per Transaction: Re-Target Time: Block Time: .2 Minutes Block Reward: 32 Coins Diff Adjustment. sydpakcoin_sdp.png 2 blocks 2015-08 5 SDP Sydpakcoin 2015-08-09 25000 180 -1 Sydpakcoin, SDP, Specifications, Algo : X13, Blocktime : 180 Seconds, Diff Adjust : every 2 Blocks, Proof of Stake : 2 % Per Year, Proof of Stake Min.Coin Age : 120 Minutes, Proof of Stake Max.Coin Age : Unlimited, Premine = 25000 Coins, Block 1 - 2400 = 50 Coins Reward , Block 2401 - 4800 = 25 Coins Reward , Block 4801 - 7200 = 12 Coins Reward , Block 7201 - 9600 = 6 Coins Reward , Block 9601 - 12000 = 3 Coins Reward , Last PoW Block : 12000, Total Block Reward during Proof of Work : 255400 gotfomocoin_gtfo.png 1 block 2015-05 GTFO Gotfomo? 2015-05-28 100000 60 200000 GotFomo? A Coin to Change The Universe Sick of giving your money away to people you don’t know? GET GOTFOMO?, SHA256 Algorithm, 60s block times, retarget every block, 100k Premine on block 2 (fair launch), 100-1000 blocks pays 100, 1000-2000 pays 50, 2k-3k pays 25, 3-4k pays 12, 4-5k pays 5, This will make ROUGHLY 200k , HIPOS will work like this:, It picks a random number 1-10, then between blocks, 0-1k it takes the random number x 100, 1-2k is random number x 25, 2-3k is random number x 12, 3-4k random number x 6 , 4-5k is random number x 3 , Then a variable rate based on the netweight after block 5k nigeriacoin_ngc.png 1 block Digishield 2014-03 NGC Nigeria Coin 47% burn 41900 120 locality 88000000000 Skein algorithm that runs cooler on GPUs and is 30% more energy efficient compared to Scrypt. It also dodges the Scrypt ASICs coming out for fairer initial distribution 2 minute block target (because anything less will have too many orphan blocks) 41900 coins per block (as suggested. it's just too funny to pass up. thank you otila) 88 billion total coins (we Nigerian's love the Chinese and the Chinese love 8's) Difficulty retargets every block (looking at a modified KGW algo) 3% dev premine for IPO, bounties, giveaways, development, support and maintenance, new feature developments etc. 47% inflation adjustment premine (NEW CONCEPT, we will destroy this premine via Proof of Burn to adjust for inflation created by PoW mining) orobit_oro.png 2014-02 ORO Orobit 50 coins per block, and 21 million total coins. ambercoin_amber.png 1 block 2014-08 7 128 AMBER Ambercoin 2014-08-17 50 120 premined 1000000000 Algorithm: X11, Total Coins: 1,000,000,000 (2,550,000 until the end of PoW, the rest in PoW/PoS hybrid age), PoW until Block 50,000, PoW/PoS hybrid after Block 50,000, Block Time: 2 minutes, Difficulty Retarget: every block, Confirmations on Transactions: 7, Confirmations on Mined Blocks: 128, Secure and Open Source, QR code support, POS INFORMATION, Stake Interest: 5% annually, Minimum Stake Age: 2 days, BLOCK REWARDS, Blocks 1 - 1000 = 100 AMBER, Blocks 1001+ = 50 AMBER minicoin_mini.png 2014-02 MINI Minicoin 1000000000.0 Quark-based cryptocoin with re-target every 20 blocks, block range coin rewards, and 1 billion total coins. cinnamoncoin_cin.png 2013-09 CIN Cinnamoncoin 0 27000000 a linear block re-target, POW / POS, 64 coins per block, and 27 million total coins. SPECIFICATIONS Scrypt Proof of Stake 5 Minute Block Target Linear Diff Retarget Block reward < 300 = 1 Coin to mitigate premine/insta-mine 1% proof of stake after 60 days of coin age 64 Coins Per Block, Halving every 2 years ~27,000,000 Million Proof of Work Coins over 10 years .01 Set Tx Fee 40 Confirms for Minted Coins to be spendable POW mining for 14 years 3 block transaction confirmation Public Key Bit: 28 PORTS RPC PORT: 19125 (test net 29125) P2PPORT: 19126 (test net 29126) . 1% yearly, SA 30/90 bigbullioncoin_big.png 2014-07 6 15 BIG BigBullioncoin 2014-07-29 1% 36 600 10000000 Specifications:, SHA-256, Ticker - BIG, 36 coins per block, Halving: 147,000 blocks, 10 minutes block targets, Block Maturity - 15, Transaction Confirmation - 6, 10 million total coins, 1% DEV Private PREMINE! - will be used for Promotion, Developing Tools Services and more ('h', 147000, 'b', ) predatorcoin_prdt.png 6 hours 2014-11 5 PRDT Predatorcoin 2014-11-14 1% 8250 60 1024650000 PREDATORCOIN is a rare, Scrypt Proof of Work only cryptocurrency. The maximum supply is 1.024.650.000 PRDT (coins) ever. Block target time is 1 minutes, which makes it very fast. PRDT uses KGW and a progressive difficulty adjustment to protect miners.Wallet Clone Scrypt code was used with the Litecoin., Coin specification :, Code clone from LitecoinD, PREDATORCOIN (PRDT) is only Scrypt, Max coin 1.024.650.000 PRDT , Halving 62100 blocks, reward block PRDT 8250 coins for Block, Target spacing 1 min, Target timespan 6 h, Diff retarget, Premine 1 %, Confirmation-maturity 5 blocks ('h', 62100, 'b') deafdollarscoin_deaf.png 168 hours 2014-11 10 DEAF DeafDollars 2014-11-16 10% 1 600 466667 Coin Type: Litecoin Scrypt, Halving: 210000 blocks, Initial Coins Per Block: 1 Coins, Target Spacing: 10 Min, Target Timespan: 168 H, Coinbase Maturity: 10 Blocks, Pre-Mine: 10 % Pre-Mine Wallet, Max Coinbase: 420000 + 46666.7 (10% Pre-Mine) ('h', 21000, 'b') adamantiumcoin_adm.png 20 minutes 2015-07 ADM Adamantium 2015-07-12 60 product support 1000000 Specifications: X11, POW TO POS, TOTAL POW: 10 000 Blocks, POW Reward: 100 ADM, POS Reward: 1 ADM, Block timing: 60 seconds, Difficulty retargeting: 20 Blocks, Stake age: 8 hours, BOOK ICO premine: 125k. ADM is a X11 POW to POS coin with a 1 week (10 000 x 60 second blocks) POW period of 100 coins per block. This gives a total of 1 million coins MAX since POS kicks in during the POW period and POS blocks are counted towards the POW period blocks. POS block reward is 1 ADM. darkdarkcoin_dark.png 2015-03 60 DARK Darkcoin 2015-03-25 0 240 rebrand of fudcoin 12908369 Ticker Symbol: DARK, Algo: SHA-256, Block Time: 4 minutes (240 seconds), Block Reward: Smoothly decreasing exponential (see chart below), Total PoW Blocks: 43,200 (~120 days), Money Supply: 12.8 M DARK, PoS Rate: 35%, PoS Min / Max Age: 6 days / 18 days, Mining/Minting Spendable: 60 blocks, Addresses begin with ìFî, Ability to create Stealth Addresses decreasing exponential riskcoin_risk.png 2015-02 RISK Riskcoin 2015-02-14 60 1000000 Blocks: 11520 BLOCKS, Block Time: 60 sec, Rewards: Block 1 : 75,000 RISK ICO COINS, Rewards: Block 2 - 2880: 10 RISK, Rewards: Block 2881 - 5760: 20 RISK, Rewards: Block 5761 - 11520: 15 RISK, Rewards: POS: 50% atomcoin_ato.png 2013-12 ATO Atomcoin re-target based on a moving average over 80 blocks, 1024 coins per block, and 100 million total coins. bastion Folklore combiner of 8 hashfns, one round each of (ECHO, Luffa, Fugue, Whirlpool, Shabal, Skein and Hamsi) including three random rounds plus an additional round of HEFTY-1. torrentcoin_tc.png 2014-06 TC Torrentcoin 2014-06-07 0.5% 150 8400000 Total Coins:84 Million Coins, Super secure hashing algorithm: 13 rounds of scientific hashing functions (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo,hamsi,fugue), Block reward is controlled by: 2222222/(((Difficulty+2600)/9)^2), CPU/GPU mining, Block generation: 2.5 minutes, Difficulty Retargets using Dark Gravity Wave, Premine:0.5% africanblackdiamond_abd.png 1 block 2014-05 ABD African Black Diamond 53000 1000000000 Scrypt, re-target 1 block, 53000 block reward, 1000000000 total coins. olympiccoin_oly.png 2014-02 OLY OlympicCoin 323000000.0 brigadecoin_brig.png 1 block 2014-06 4 30 BRIG Brigadecoin 2014-06-19 0 1000 30 2000000000 Brigadecoin is from a Team of Coin Developers which are Involved in the Development of lot of Cryptocoins.Brigadecoin is POW/POS Hybrid coin with X11 Algorithm and coin control feature, X11 Algo uses eleven hashing functions from the Blake algorithm to the Keccak algorithm making it very secure which really is needed for coins that do so well for CPU’s., Specifications:, Max Coins:2000000000 , Block time: 30sec, Block reward:1000 BRIG, Last POW block 500000 (300 Days of mining), Total POW coins: 500000000, Block retarget: every block, POS interest: 10% per year, Coins stake after 10 hours, Confirmations per transaction 4, Coins mature after 30 blocks, Premine: 0 (0%) daycoin_dac.png 2014-01 DAC Daycoin re-target every 2016 blocks, 50 coins per block, and 19.7 million total coins. spacebucks_sbx.png 2014-06 SBX Spacebucks 2014-06-12 0 1 180 1000000 3 Minute Block Times, 1 Coin Per Block, 1,000,000 Coins Total x15coin_x15coin.png DGW 2014-06 10 80 X15COIN X15Coin 2014-06-04 1% 1000 60 10000000 X15COIN uses the latest X15 Algorithm. X15 is the latest algorithm ,15 ROUNDS OF SCIENTIFIC HASHING (blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo, hamsi, fugue, shabal, whirlpool). CPU or GPU mine X15COIN., Algo: X15, Total coin: 10 Million , Block time: 60 seconds, Difficulty Retargets:DGW, Decentralized MasterNode Network, Confirmations on Transactions: 10, Maturity: 80, 1% Premine -- For bounties, development., POW REWARD, Total Block: 10 000, Block 1- 10 000: 1 000 X15Coin, After block 10 000 POS will start ., 1% Annual PoS. bones_bones.png 2014-03 BONES Bones 3800000.0 re-target every 6 hours, 7-10 coins per block + difficulty bonus rewards, and 3.8 million total coins. buycoin_buy.png 2014-08 BUY Buycoin 2014-08-17 0.99% 99 99 99000000 esportscoin_esc.png 2014-07 ESC ESportsCoin 2014-07-23 0 30 1200000 26th July - First halving, PoW reward is now 42, 23th July - ESC launched successfully and smoothly, detailed developments time frame will be announced this weekend!, ESportsCoin, The ultimate solution of cryptocurrency for E-Sports industry, Fast Transactions & Secure & Anonymity & Dedicated & Rare, Website:, Follow us:, Chat:, ESend beta Whitepaper, Coin Specs, Algo: X11, 7 days PoW, PoW height: 20160, Total coins after PoW: ~1.2M, Block time: 30 seconds, Block rewards: 84 initially, halving at block 6720, 13440, (1 reward for first 150 blocks), PoS starts block: 15000, PoS interest: 5.9%, PoS Min & Max ages: 24 hours & 30 days, No Premine except IPO, P2P: 14528 RPC: 14529 worldfootballcoin_wfc.png Digishield(0) 2014-05 10 WFC World Football coin 1 block 39 60 3200000 World Football Coin is created for football fans around the world joining crypto currency miners together in football competition using there miner software to compete. Initial Public Coin Offering (IPCO) is 10% of total coins and will be sold to early investors from the World Football Coin store. No premine except 1 block reserved for IPCO as explained in block reward calculation table. boycoin_boy.png 2014-04 BOY BoyCoin >5% 10000000000 Block Time: 30 Seconds Difficulty Retarget Time: 1 hour Premine: over 5% for Justin Bieber Rewards Block 1 â 200,000 : 1-50,000 BOY Reward Block 200,001 â 300,000 : 1-40,000 BOY Reward Block 300,001 â 400,000 : 1-20,000 BOY Reward Block 400,001 â 500,000 : 1-10,000 BOY Reward Block 500,001 â 500,000 : 1-5,000 BOY Reward Block 600,001 â 700,000 : 1-2,500 BOY Reward Block 700,001 â 800,000 : 1-1,250 BOY Reward Block 800,001+ : 500 BOY Reward quitdoughcoin_quit.png 1 block 2015-01 QUIT QuitDough 2015-01-09 10% 120 30000000 TickerSymbol: QUIT, Algorithm:X13, Type:POW/POS, TotalCoins: 30,000,000, then 1-150 (1 Block ) , then 50 Coins per Block, BlockTime: 120, Difficulty Re-target: Every Block, Pre allocation:10% diocoin_dio.png KGW 2014-03 DIO Diocoin 2014-03-29 50 60 21000000 indocoin_idc.png 1 block 2014-05 IDC IndoCoin 2014-05-21 60 blocks 200000 60 194500000000 IndoCoin is aimed towards Indonesians. Rewards are soft capped at 194,500,000,000. After that every block generate 5000 IDC. 1-500,000 = 200,000 IDC 500,001-1,000,000 = 100,000 IDC 1,000,001-1,500,000 = 50,000 IDC 1,500,001-2,000,000 = 25,000 IDC 2,000,001-2,500,000 = 14,000 IDC 2,500,000+ = 5000 IDC ('c', 500000, 'b') navajocoin_nav.png 1 block 2014-07 60 NAV Navajocoin 纳瓦霍币 2014-07-06 30 rebrand of summercoin2 50000000 Symbol NAV, Algorithm X13 PoW/PoS (PoW ended, now PoS), Block Time 30 seconds, Coins swapped 25,232,976 (amount of coins on the SummerCoinV1 chain, swapped at 1:1 ratio), Coins not swapped 260,586.00121295 (donated to the foundation), RPC Port 33333, P2P Port 33330, Proof of Work (ended) , Total Coins (by PoW) 50,000,000 NAV, Difficulty Retarget Block 0 to 250: each 25 blocks, Block 251 and on: adjusted each block, PoW Max Block Height 14000 (in 7 days of PoW), Block Reward 1500, Block Halving Block 14,000 (150 NAV reward), Block 50,000 (15 NAV reward), Block 100,000 (end of PoW), Proof of Stake , Min Age 4 days, Max Age unlimited, Minted Blocks Maturity 60, Stake Interest 20% for the first year, 10% for the second year, 5% for every following year clamscoin_gcs.png 2014-06 120 GCS CLAMScoin 2014-06-10 4.9% 90 100000000 Algorithm: X11 Scrypt- CPU/GPU, ASIC Resistant, Max CLAMS: 100 Million, Block Rewards, month 1 - 3: 60.2 , month 4 - 6: 28.9 , month 7 - 12: 14.5 , month 12 - 36: 7, months 37+: 7, Block time: 90 seconds, Difficulty: Re-Targeting with Dark Gravity Wave DGW, 120 confirmations for blocks to mature, PREMINE: 4.9% reducing overseaschinesecoin_occ.png 2014-03 OCC Overseas Chinese coin 2014-03-01 fuckcoin_fuck.png 2014-01 FUCK Fuckcoin worldtradefundcoin_xwt.png 60 blocks 2014-10 XWT WorldTradeFund 2014-10-20 0 90 rebrand of WTFcoin 10000000 Algorithm: x15, PoW / PoS, TOTAL Coin Count: aprox 10,000,000 XWT, PoW Stage: 3 Days Mining (~90% of Total Supply), Main PoW phase ends with Block #2880, At Block #2881 PoW will continue with 1 XWT per block., PoS starts at Block #2881 with 1% Annual Interest, Block Time: 90 Seconds, Difficulty re-target every 60 blocks, Block Rewards:, Block 1 - 960: 5869 XWT, Block 961 - 1920: 2350 XWT, Block 1921 - 2880: 1180 XWT, Block 2881 - : 1 XWT, Block 70000 FULL POS, No IPO No Pre-mine poopcoin_poo.png 2014-06 POO Poopcoin 2014-06-12 5% 60 1200000 x11, 1.2 million coins max, Hybrid PoW+PoS (PoW for 7 days), 5% Premine to cover cost bounties, operating cost, and getting coin added to exchanges., will release rest of the specs later. giftcardcoin_gift.png 2015-03 50 GIFT GiftCardCoin 2015-03-22 60 7750000 Name: GiftCardCoin , Symbol: GIFT , X13 PoW/PoS Hybrid, Block time: 60 secs , Total Coins: 7,750,000, Confirmations: 50 blocks , PoW Total Blocks: 3,500 , PoS Interest: 92% , Min Stake Age: 6 hrs , Max Stake Age : 12 hrs , RPC Port: 39887 drachmacoin_drac.png DGW 2014-05 DRAC DRACHMacoin 2014-05-07 43 90 65500000 Proof of Work based. Mine using any of the 7 algorithms : sha256d(default), scrypt all the way through x11, x13, and x15., Difficulty is retargeted every block., 90 second block target per algorithm (30 second average across 7 algorithms)., 65.5 million total coins, 43+ coins per block. , Random superblock with reward., Up front super mining rewards., DarkGravityWave stackedcoin_stkc.png 2014-01 STKC Stackedcoin re-target every 2106 blocks, 50 coin block reward, and 21 million total coins. Stock Coin. watcoin_wat.png 2014-06 WAT Watcoin 2014-06-13 1% 99 199997800002 secure hashing algorithm X11 + KGW, making the coin 51% attack resistant and ASIC mining resistant., 99 second Block Targets, 9 Hour Difficulty Readjustments reducing quantumcoin_qtc.png 2013-06 QTC Quantumcoin 2013-06-06 Dead and relaunched scrypt-based cryptocoin (Litecoin clone) with a dynamic re-target, variable then 10 coins per block, and 256 million total coins. nautiluscoin_naut.png DS 2014-05 NAUT Nautiluscoin 8000 16100000 re-target using DigiShield, 8000 coins per block, and 16.1 million total coins. koalacoin_koala.png 2014-05 KOALA Koalacoin 2014-05-19 violetcoin_vlt.png 2014-04 VLT Violetcoin 50 10000000 50 coins per block and 10 million total coins. bankcoin_bank.png DGW 2014-05 10 50 BANK Bankcoin 60 100000000 pure PoS coin. After of 7 day as a PoW, the coin has been transfered to a pure PoS coin. POS generate after 8000 blck. Stake interest: 10%/year POS Min age: 3 day POS Max age: unlimited POW last: 10000 - 7 days emperorcoin_epc.png 2014-02 EPC Emperor coin 2014-02-25 80 240 58880000 kakacoin_kkc.png 2014-02 KKC KaKacoin 30000000.0 wcgcoin_wcg.png 2014-03 WCG WCG Coin 2014-03-11 secondary 100000000 x13coin_x13c.png 2014-06 110 X13C X13Coin 2014-01-06 201 60 50000000 tenfivecoin_10-5.png 4 hours 2014-03 10-5 TenFive Coin 2014-03-05 90 10500000 ASIC resistant and multi-pool protected implementation of a blend of scrypt Adaptive-N and kimotos gravity well. The progressive designed rewards scheme will provide flat rewards for the duration of the mining process. cpu2coin_cp2.png 2013-08 CP2 CPU2Coin yes premined mammothcoin_mamm.png 2014-06 MAMM Mammothcoin 2014-06-02 45 1000000 X13 PoW/PoS, Anonymous Wallet (to come), True random superblocks, PoW/PoS independent, thus very stable, Extreme fast transactions (average 45 seconds), Limited time PoW mining (about 2 months), then will be pure PoS, Coin required held only 1 day before PoS interests generated jfkcoin_jfk.png 2014-03 JFK JFKcoin ferengicoin_fer.png 2014-03 FER Ferengicoin 10000000000.0 re-target every 6 hours, 10,000 coins per block, and 1 billion total coins. Coin details, SHA-256 algorithm (just like Bitcoin to mine with ASIC, GPU or CPU), Block Rate (in seconds) 180, Initial value per block 10,000 coins, Block halving rate 50,000 blocks (3 months or less), Maximum coins: 1,000,000,000 coins (1 billion coins), Target timespan (adjustment of difficulty) re-target every 120 blocks (6 hours approx.), Coinbase maturity 200 blocks (newly miined coins will not become spendable until 200 blocks found to avoid double spending), 1% pre-mined tilecoin_xtc.png 2014-08 XTC 物联币 TileCoin 2014-08-29 100% 600 100000000 Ticker Symbol: XTC, Initial Coins: 100,000,000 of Tilecoin @ 0.00000542 BTC/EACH, Percentage of TileCoin will be burned for transactions., Consensus Algorithm: Counterparty (BTC Blockchain/VIA Blockchain - future), Additional Algorithms: Ed25519 signatures, AES-128, TBA, Coin Distribution: All Coins will be sold via BTER.com @ 0.00000542 BTC/EACH TileCoin (XTC), Block Interval: 10 Minutes (Bitcoin)/24 Seconds (Viacoin - future) galtcoin_glt.png 2014-02 6 GLT Galtcoin 2014-01-31 0 50 premined 42000000 Galt Coin - GaltCoin Abbreviation: GLT Algorithm: SCRYPT Date Founded: 1/31/2014 Total Coins: 42 Million Confirm Per Transaction: 6 Blocks Re-Target Time: Block Time: Block Reward: 50 Coins Diff Adjustment: Premine: 50,000 Coins. enigmacoin_enc.png 2015-04 ENC EnigmaCoin 2015-04-23 0 100 60 1000000 Name: EnigmaCoin, Ticker: ENC, POW+POS (X13), Block reward: 100 ENC (Block 1-100 - reward 1 ENC), Distribution: POW+POS, Money Supply: 1,000,000 ENC , POS starts at 7,000 blocks, POW ends at block height: 10,000, Total POW coins ~ 850,000, Block time: 60 seconds, POS interest: 5% per year gaiacoin_gaia.png 1 block 2014-10 10 240 GAIA GAIAcoin 2014-10-02 100% 60 node.js 24000000 Ticker: GAIA, Total coins: ~24 Million*, Algo: 100% POS, Annual Interest: 5%, Block time: 60 seconds, Min transaction fee: 0.0001 GAIA, Confirmations: 10, maturity: 240, Min stake age: 4 hours, no max age., Difficulty retarget: every block eaglecoin_ea.png 2015-06 EA Eagle 2015-06-30 120 40000000 Name: EAGLE, Ticker: EA, Algorithm: SHA256 POW, Difficulty Retargetting Algorithm: D.G.W. , Time Between Blocks: 120 sec., Block Reward 200, Block Reward Halving Rate 100000, Total Coins: 40000000 ('h', 100000, 'b') altcoin_atc.png 2014-02 ATC Altcoin SHA256 based cryptocoin with 512 coins per block and 268 million total coins. bticoin_bti.png 2015-04 BTI BTIcoin 2015-04-18 100% 60 product support vehicle 6000000 Name: BTI , Total Coins: 6,000,000 , Mining: 100% mined, Staking: 3% POS per year, This is not the traditional cryptocurrency model that is out there now. Our sole use for this crypto is to fund our development and provide people with a way to get our products at a discount. Our currency will be solely used to purchase physical hardware or software from our company at a discount than using fiat or BTC. spaincoin_spa.png 2014-03 SPA SpainCoin premined 50000000.0 scrypt bytecoin_bte.png 2013-04 BTE Bytecoin 2013-04-02 50 600 premined 21000000000 re-target every 2016 blocks, 25 coins per block, and 21 million total coins. The 1:1 bitcoin copy. Relaunched. Bytecoin - Byte Coin Abbreviation: BTE Algorithm: SHA-256 Date Founded: 4/2/2013 Total Coins: 21 Billion Confirm Per Transaction: Re-Target Time: Block Time: 10 Minutes Block Reward: 50 Coins per block Diff Adjustment 2,016 Blocks. ('h', 3153600, 'b') globalcurrencyreservecoin_gcr.png 2015-08 50 GCR GlobalCurrencyReserve 2015-08-19 100% 90 99000000 GCR is the the first home-based business opportunity with its own cryptocurrency coin and immediate opportunities for wealth-building and personal success., Type: Proof of Stake (PoS), Block Time: 90 seconds, Stake Interest: 5%, Block Maturity: 50 blocks, Minimum Coin Age: 8 hours socialnetworkcoin_snc.png 2014-07 10 50 SNC SocialNetworkcoin 2014-07-16 0 60 60000000000 INTRO, SNCoin is the first crypto-currency based on Bitcoin Theory rewards social media popularity., Proof of Popularity(PoP) and Proof of Work(PoW) both works in SNC System., SNCoin is designed with energy conservation and fair distribution in mind., Shared Weight from Friends Later Mechanism(SWFLM) made possible to quantify the popularities of social media users., Avoiding Botnet Attack while being friendly to Key Opinions Leaders., Mining with SCRYPT algorithm., SPECIFICATIONS, Block time : 1 minute, Difficulty Retarget : every block, Nominal Stake Interest : 1% annually, Min Transaction Fee : 0.0001 SNC, Fees are paid to miners, Confirmations : 10, Maturity : 500, Min Stake Age : 8 hours, no max age, P2P Port : 15712, RPC Port : 15713, Proof of Work : 59940000000 SNC, Algorithm : SCRYPT, Block Reward : 6000000 SNC, no halving, Height : 11 - 10000, MECHANISM, Weight = Max(6, Friends * (1 + Min(3, Followers / Followings))), SharedWeight = Weight / Followings, PV = Sum(SharedWeight)/ 6 + Weight, DISTRIBUTION SCHEDULE, In total, the amount of SNCoin is 60,000,000,000 (60 billion)., 20 billion coins will be fairly distributed among users according to their individual Proof of Popularity(PoP) PVs calculated in according to SWFLM and SNCoin Distribution Mechanism., 20 billion coins will be mined by miners on the basis of Proof of Works(PoW) and Mining Mechanism., 6 billion coins are scheduled for First IPO of SNCoin., 12 billion coins are scheduled for Second IPO of SNCoin., 2 billion coins rest are reserved for R&D and Business Operation teams of SNCoin., 6 billion SNCoins will be distributed in First IPO, BTC Address 1NDZLAfndv5rMj2gbgysC9fZaPM2Ta8TDo, LTC Address LastFZppJJPyeZE2535hw7eAHuWrK578VZ, The volume of BTC approach in this stage of IPO is 3 billion SNCoins., The volume of LTC approach in this stage of IPO is 3 billion SNCoins., (more IPO details referencing the WHITE PAPER), [ First IPO Period ] 00:00:00 on July 17th, 2014 ~ 00:00:00 July 24rd, 2014, DISTRIBUTION START-TIME, Upon finishing the development of SNCoin QT Wallet, SNCoin Official Website and Official SNCoin Distribution Platform., APPROACH, See more informations in our WHITE PAPER :, CONTACT US, Email : contact@sncoin.org, PS: For better network promotion of SNCoin, we need AGENT TEAMS all over the world (for different social-medias) who could share specific quota of reserved coins. Just email us if you are interested in. cashcoin_cash.png 2014-02 CASH Cash coin 47433600 variable re-target, variable block reward, and 47.4 million total coins. 10% yearly (0.6%), SA 30/90, Coins 2.26 fitcoin_fit.png 1 minute 2014-05 120 FIT Fitcoin 2014-05-14 0 {'gpu': '2222222/(((Difficulty+2600)/9)^2)', 'cpu': '11111.0 / (pow((dDiff+51.0)/6.0,2.0))'} 60 90000000 Fitcoin is the first digital currency that provides to its community the chance to have a healthier life. ('h', 1, 'y') solcoin_sol.png 2014-01 SOL Solcoin rabbitcoin_rbbt.png 2014-03 RBBT RabbitCoin 100000000000.0 supernetasset_token.png 2014-09 TOKEN SuperNET 2014-09-06 asset:15641806960898178066 SuperNET There is no limit on the amount of TOKEN available overall, though there may be short-term restrictions on TOKEN available at a given price if volumes are very high. If a large number are sold, this does not dilute other buyers' holdings, since SuperNET is backed by the cryptocurrency paid: SuperNET is not a cryptocurrency and this is not like a typical coin offering. (SuperNET is a basket of cryptocurrencies and revenue-generating assets than can be considered more like a closed-ended mutual fund.) The fundraiser will start at 14:00 GMT Saturday 6 September. It will continue for 2-4 weeks in total, for as long as there is sufficient interest. At the end, all TOKEN will be converted to UNITY, which can be withdrawn to a NXT wallet or traded on the AE, BTER and other supporting exchanges. bancorcoin_bncr.png 2014-01 BNCR Bancorcoin 2014-01-31 128 60 281600000 unitecoin_uni.png 1440 blocks 2013-12 UNI UniteCoin 2013-12-01 0 50 100000000 Unite Coin - UniteCoin Abbreviation: UNI Algorithm: SCRYPT Date Founded: 12/1/2013 Total Coins: 100 Million Confirm Per Transaction: Re-Target Time: 1440 Blocks Block Time: Block Reward: 50 Coins Diff Adjustment: Premine:. rastacoin_rtc.png 2014-01 RTC Rastacoin 353000000.0 civilisationcoin_civ.png 20 minutes 2014-09 10 50 CIV Civilisationcoin 2014-09-26 1% 1000 45 resuscitated 14000000 This coin was brought to my attention by it's developer (civilization.team) sometime back when allegedly according to him his first launch failed due to the coder then mined the pre-mine and ran off leaving him without anything , I do not know how true this or cannot comment on the same , anyhow he wanted me to change the port and regenerate the genesis which I did and sent back the source and wallet but then never heard of him. I had this lying around for quite a while and thought I'd share with your for better or worse. cybercoin_cc.png 2015-03 10 61 CC CyberCoin 2015-03-31 0 64 -1 CyberCoin PoW/vPoS 2.0, Release date: 9 PM GMT Tuesday March 31st, 2015/ No premine No ICO, Ticker: CC, Hashing algorithm: Scrypt, Interest: vPos 2.0 variable, Block generation: 64 seconds, Transaction confirmations: 10, Min age: 1 hour, no max age, Maturity: 61 confirmations standardcoin_std.png 2014-03 STD StandardCoin 400000000 mugatucoin_muga.png 2014-05 MUGA Mugatucoin 2014-05-31 0 30 50000000 PoW/PoS-based cryptocurrency with NO IPO and NO Premine. The purpose of MugatuCoin is to take over the world, by any means necessary. bellacoin_bela.png 2014-02 BELA Bellacoin re-target every 1000 blocks, 50 coins per block, and 54 million total coins. litecoinnew_ltn.png 6 hours 2015-01 120 LTN LitecoinNEW 2015-01-12 0 50 60 84000000 Start difficulty : 0.00024414, Premine 0%, Initial public offering (IPO) 0%,, Coin properties, Coin type Litecoin (Scrypt), Halving 840.000 blocks, Initial coins per block 50 coins, Target spacing 1 min, Target timespan 6 h, Coinbase maturity 120 blocks, Max coinbase 84.000.000 coins ('h', 840000, 'b') chaincoin_chain.png 2014-05 CHAIN ChainCoin pimpcashcoin_pimp.png 2014-11 10 69 PIMP Pimpcash 2014-11-23 3.9% 2000 69 69000000 69,000,000 Total PoW supply, 2,000 Coins per PoW block, PoW Algorithm: Scrypt, PoW + PoS Hybrid, PoS interest 69% Annually, PoS Min Stake Time: 8 hr, Pos Max Stake Time: Unlimited, 69 sec block target, 69 confirmations for blocks to mature, PoW End block: 33,186, transactions = 10 confirmations, rpcport=6969, port=6970, test ports = ( RPCport 7979 ) ( Port 7980 ), Blocks, 1 : 3.9% for Dev Fund ( 2,691,000 PIMP ), 2 - 30 : 1 PIMP (Anti Instamine), 31 - 33,186 : 2000 PIMP quedoscoin_qdos.png 2015-11 QDOS Quedos 2015-11-11 38400000 50 60 96000000 Type: Pure POW, Algorithm: X11, Blocksize: 8 MB, Total Supply: 96,000,000, Blocktime: 60 Seconds, Block Reward: 50 QDOS ('h', 576000, 'b') flirtcoin_flirt.png 1 second 2014-11 FLIRT Flirtcoin 2014-11-07 10000 300 1800000000 Flirtcoin is the official digital currency of Flirt Life. Flirtcoin provides adults with a new and exciting way to show their interest in each other “Flirting”!, Flirtcoin Technical Specifications:, Algorithm: SHA-256, Block Reward (initial): 10,000 Flirtcoins, Block Time: 5 minutes (300 seconds), Difficulty Retarget: 1 second (each block), Halving Interval: 75,000 blocks, Maximum Supply: 1,800,000,000 Flirtcoins (1.8 Billion) ('h', 75000, 'b') macdcoin_macd.png 2014-06 288 30 MACD MACDcoin 2014-06-12 0 variable 60 12000000 X13 algorithm, 12 Million Coin Proof-of-Work (PoW) Maximum*, 0% Premine, 7 Day Maximum Proof-of-Work (PoW) Mining Phase*, Difficulty retargets every block - nothing silly to interfere with the Proof-of-Stake (PoS) implementation, Confirmations - 288, Maturity - 30, 12% Proof-of-Stake (PoS)*, Min stake age - 1 Hour, Max stake age - 1 Day, , BLOCK REWARDS, 1-100: 10 MACDCoins, 101-500: 1,250 MACDCoins, 501-1500: 1,500 MACDCoins, 1501-3000: 1,250 MACDCoins, 3001-7000: 1,000 MACDCoins, 7001-8500: 1,250 MACDCoins, 8501-10000: 1,500 MACDCoins dokdocoin_ddc.png 2014-03 DDC Dokdocoin 50000000.0 re-target using Kimoto's Gravity Well, 100 coins per block, and 50 million total coins. bitzcoin_bitz.png 10 - 20 minutes 2015-02 BITZ Bitz 2015-02-13 60 20000000 Algo: X11 Pow/PoS hybrid, PoW has ended at block 100,000: ~ 2,000,000 coins, Time per block: ~ 1 minute (both PoW and PoS), Difficulty Re-target: 10-20 minutes, PoS 10% per year from block 0, As there will only be a finite amount of BITZ in existence the value will be determined by the demand for the currency. It has now been decided with consent in the community that the PoW phase will end at block 100,000 which is 2,000,000 BITZ. It will then be a pure PoS currency with an annual interest rate of 10%. This will keep BITZ in the realm of relatively rare when widely adopted but not deflationary like Bitcoin. bamitcoin_bam.png 2015-06 80 BAM Bamit 2015-06-27 1% 60 -1 Name: Bamit, Ticker: BAM, Algo: X11, Block time: 60 seconds, Block reward: 101-116, PoW Supply: 947307 BAM, Last PoW block: 8640 6 Days PoW, PoS Interest: 23% per year, RPC Port: 9988, Coinbase Maturity: 80, 1% Premine (Block 1), Fair Start (No reward Blocks 2-10), Reward Blocks begin on block 11 surgecoin_srg.png 2014-04 SRG Surge Coin 50000000 PoW Algorithm: Scrypt Difficulty Re-target: KGW Block Time: 60 seconds Initial Block Reward: 50 Coins per block Max Supply: 50-Million Surge Coins horsepowercoin_hsp.png 2 mins 2015-09 120 120 HSP HorsePowerCoin 2015-09-20 6% 60 1000000 Name: HorsePower, Symbol: HSP, Algo: Scrypt, PoW, Block time: 1 min, Retargeting: 2 mins, Maturity: 120 blocks, Confirmations: 120, P2P Port: 32321, RPC Port: 32322, Premine: 6%, horsepowercoin.conf, Blocks 2 - 1000 = 100 Coins, Blocks 1001 - 3000 = 200 Coins, Blocks 3001 - 6000 = 300 Coins, Blocks 6001+ = 250 Coins arpacoin_arp.png 2015-06 200 ARP ArpaCoin 2015-06-03 60 400000000 Sha256D, Dynamic rewards , Block Spacing: 60 Seconds , Stake Minimum Age: 5 Hours, POS 2.0, Port: 56631 RPC Port: 57631 , Max POW blocks: 5000, Anti-instamine of 30 blocks., 100 coins per block from block 31-5000, 50 coins per block from block 5001-10000, 25 coins per block from block 10001-15000, 12.5 coins per block from block 15001-20000, 10 coins per block from block 20001 and beyond., Our Arpanodes will get 60% of pos rewards hobonickels_hbn.png 2013-07 HBN HoboNickels 2013-07-24 0 5 30 120000000 Fast NVC clone. Extreme, over 100% ROI. 0 starting diff. Hobo Nickels Coin - HoboNickelsCoin Abbreviation: HBN Algorithm: SCRYPT Date Founded: Total Coins: 120 Million Confirm Per Transaction: Re-Target Time: Block Time: 30 Seconds Block Reward: 5 Coins Novacoin-based. None 100% NVCS (1.9%), SA 10/30, Coins 5.71 entrustcoin_etrust.png 2015-04 125 ETRUST Entrustcoin 2015-04-04 20000 10 60 experimental 1000000 Welcome to eTRUST, the only coin with a system designed to give a guaranteed 10% return every day. The blocks will pay out as follows: 10 Total, 2 eTrust Coins rewarded to a miner and 8 eTrust Coins deposited into the eTrust payout address (publicly listed) If you send money in to the address, your payout will be 110% with the funds accumulated from miners within a particular duration. Everyone will be able to receive the 110% without ever having to lose due to the skimmed funds always being vested into the fund account.By utilizing this particular 4:1 ratio we will be able to pay back EVERYONE. Premine will be used for giveaways and promotions and future developments., Ticker: eTRUST, Distribution: PoW - Fountain/Investing System, Algorithm: Scrypt, Block Time: 60 Seconds, Premine: 20,000 eTRUST, Maturity: 125 Confirmations, Block Reward 10, 8 to eTRUST funds account, 2 to the miner cthulhucoin_off.png 20 blocks 2013-09 30 OFF Cthulhu 2013-09-14 0 5 60 premined Quark-based cryptocoin with re-target every 20 blocks and 5 coins per block. Cthulhu Coin - Cthulhucoin Abbreviation: OFF Algorithm: SCRYPT Date Founded: 9/14/2013 Total Coins: Confirm Per Transaction: 30 Blocks Re-Target Time: 20 Blocks Block Time: 1 Minute Block Reward: 5 Coins Per Block, halving after 6 months Diff Adjustment: 20 Blocks Premine: 10k. ('h', 6, 'm') evilcoin_evil.png 2015-12 5 EVIL Evilcoin 2015-12-01 0 60 21024000 Coin Name : Evil coin, Ticker : EVIL, Algo : x11, POS or POW : Both, PoS Min Age : 1 Hour, PoS Max Age : 720 Hours, Mature : 5 Blocks, Max Coin Supply : 21024000, PoS : 2 %, Block Time : 60 seconds, Last PoW Block : 525600, PoS Start after : 525000, Premine : 0 libertyreservecoin_lr.png 2014-12 LR LibertyReserve 2014-12-20 10000000 60 25000000 25mil total coins (10 mil pre mined) 100% pos Last pow block 5000 (all blocks of 0 reward) nonamecoin_nonc.png 1 block 2014-11 7 120 NONC NoNameCoin 2014-11-17 5% 450 60 12000000 opalcoin_opal.png 2014-09 OPAL Opalcoin 2014-09-11 0 1000 90 relaunch 15000000 Opal – X13 CryptoCurrency, Algorithm: X13 POW/POS starts on block 15,000,, Opal is a re-brand, and entire re-release of OnyxCoin V2. The first OnyxCoin (V1) was, unfortunately, a scam. The original developer had coded a hidden premine and sold these coins when Onyx initially hit an exchange – the coin effectively died at this point. Shortly after this another developer decided to try and resurrect OnyxCoin by launching a new version of the coin (named OnyxCoin V2, surprisingly!) after making some changes to the PoW schedule, and removing the hidden premine. OnyxCoin V2 was a completely new coin – 100% detached from the original OnyxCoin – in everything except the name. The launch was successful, as was the mining period – and also the transition to PoS. Sadly, the Onyx name was too far tarnished by the original scam, and interest in the coin has faded. It is at this point that Opal was born. Opal uses the OnyxCoin V2 blockchain, which has always been a fresh new chain, but has been completely re-branded and is now backed by a team of committed members who wish to see the coin succeed. We liked what we saw in the codebase, and the wallet, and felt this was a great foundation from which we could build on - the first task being to detach it completely from the original name., RPC Port: 51990, P2P Port: 50990 informationcoin_itc.png 1 block 2014-05 ITC Informationcoin 70000000 re-target every 1 block, 0 coins per block (POS only), and 70 million total coins. solacecoin_solis.png 2015-08 SOLIS Solace 2015-08-30 0 <5 60 8800000 Solace has a very low emission rate that will produce no more than 8.8m coins [no premine]. Solace algorithm is proof-of-work forked from cryptonote. Block reward is <5 coins, where as other coins block rewards are 25+ coins. Solace overall goal is to build curiosity within the crypto world and challenge the potential users of Solace to understand how cryptonote algo works and what minor changes were made to Solace. blackcatcoin_bcat.png 2014-06 BCAT Blackcatcoin 2014-06-02 0 30 1100000 POW LAST BLOCK: 10000, Block time: 30 seconds, POS Min age: 15 minute, POS Max age: unlimited, Confirmations: 10, Maturity: 400, Stake interest 5%, PREMINE: 0% chavezcoin_chvz.png 2015-06 30 days CHVZ Chavezcoin 2015-06-17 100% 60 16382264 Algorithm: X11, Intial coins: 8.191.132, total coins: 16.382.264, BlockTime: 60 sec, Stake: 5% anual, Minimum Stake time: 7 days, Coin Maturity: 30 days airdrop-faucet Dissemination via airdrop initially, then by faucet. sheckelcoin_sks.png 2014-10 160 SKS Sheckel 2014-10-07 4% 24 300000000 Encrypted Messaging, Stealth Addressing, POW MAX Height: 50 million (Roughly 15 years), Block Times 24s, Premine 4%, MAX COIN: 300 Million, Est Supply: 250M, 160 confirms for maturity, 2% Interest, Staking starts 1hr after mining begins, Payout initially is 10 SKS, Every block, payouts are reduced by 20 satoshis stronghandscoin_shnd.png 2015-09 SHND StrongHands 2015-09-30 150 1000000 Coin Stats, name - StrongHands, PoW Algo - Sha256D, PoS - min stake age - 30 days, PoS - Reward 100%, Blocktime - 2.5 mins ecocoin_eco.png 2.25 minutes 2013-08 5 20 ECO Ecocoin 2013-08-26 0 45 10200000 Scrypt-based Eco Coin - EcoCoin Abbreviation: EGC Algorithm: SCRYPT Date Founded: 8/26/2013 Total Coins: 10.2 Million Confirm Per Transaction: 5 for tx 20 for mint Re-Target Time: 2.25 Minutes Block Time: 45 Seconds Block Reward: 7 Coins - Varies Diff Adjustment: Premine:. (7, 'v') indexcoin_idc.png 2014-01 IDC Indexcoin Scrypt storjcoinx_sjcx.png 2014-07 SJCX Storjcoin X 2014-07-18 asset: 500000000. firecoin2_fc2.png 2014-05 FC2 Firecoin2 2014-05-16 retarget every 2016 blocks, 200 coins per block, and 336 million total coins. gamecoin_gme.png 30 minutes / 12 blocks 2013-05 GME GameCoin 2013-05-12 0 1000 150 1670000000 Unfinished FTC clone , killed on launch. Failed to revive once, now finally revived. Game Coins - GameCoins Abbreviation: GME Algorithm: SCRYPT Date Founded: 5/12/2013 Total Coins: 1.67 Billion Confirm Per Transaction: Re-Target Time: 30 Minutes Block Time: 2.5 Minutes Block Reward: 1000 Coins Diff Adjustment: 12 Blocks Premine:. mastertradercoin_mtr.png 2015-02 3 50 MTR MasterTradercoin 2015-02-13 10000000 60 trading system 10110000 MasterTrader Coin is new cryptocurrency aimed at developing social connections with avid Crypto enthusiasts seeking professional insight, analysis, and ideologies revealing the art of trading. ascentcoin_asc.png 1 block 2014-05 4 30 ASCE Ascentcoin 2014-05-16 2% 150 30 25000000 50000 POW blocks Ascentcoin is deisgned by using X11 Algorith , this will lower your heat and power usage while mining so it will cost you less and damage your hardware less and you can hold more! PoW Max coins: 75,000,00 PoS interest 15%, Minimum Coin Age: 8 Hours PoW Algorithm: X11, PoW + PoS, Symbol: ASCE, PoS interest 15%, , Minimum Coin Age: 8 Hours, 30 second block target, 30 confirmations for blocks to mature!, Retarget difficulty each block, PoW Total blocks: 50,000 POW blocks, PoW Payout: 150 per block, PoW Max coins: 7.5 Million, PoW Confirmations: 4, NO IPO, 2% Premine, ('h', 50000, 'b') edwardsnowdencoin_esc.png 4 blocks 2015-05 6 200 ESC EdwardSnowdenCoin 2015-05-13 0 10000 60 2628000 Edward Snowden Coin 0.0.1 [ESC] Node Hardcoded, Proof of Work SHA256D Cryptocurrency based on Unobtanium 0.9.x (Bitcoin 0.8.99), Kimoto's Gravity Well, EventHorizonDeviation = 1 + (0.7084 * pow((double(PastBlocksMass)/double(28.2)), -1.228)), BlocksTargetSpacing = 1 * 60, PastSecondsMin = TimeDaySeconds * 0.23, PastSecondsMax = TimeDaySeconds * 1, Block Reward Halving = Every Day, Blockchain, Constant block reward: 10000, Maximum coins: Much More, 200 blocks to mature, 6 transactions to confirm, Minimum subsidy of 0.00000010 for all transactions, Estimated PoW lifespan: 5 years ('h', 1, 'd') roseonlycoin_rsc.png 2014-05 RSC Roseonlycoin unattaniumcoin_unat.png 4 blocks 2014-09 6 UNAT Unattaniumcoin 2014-09-10 0.16 8 1000000 Algo: SHA-256, Reward: 0.16 UNAT, Spacing: 8 seconds, Retarget: Every 4 blocks, Target block time will be 8 seconds after block 52, down from 2.5 hours, Block Reward has been decreased to 0.16 UNAT per block, down from 200, Difficulty will readjust every 4 block, up from 2, The daily mining yield remains unchanged. Roughly 1920 UNAT is the target mining reward across the entire network, both pre and post fork, The ratio of networkseconds:UNAT per day remains unchanged, Updated seednodes, New icons and logos, 6 confirmations will now take only 48 seconds, down from 128 days, Network hashrate now available in Windows and OSX via getnetworkhashps, Network hashrate based on the last 5 blocks, down from 120, nopecoin_nope.png 2014-10 10 10 NOPE NopeCoin 2014-10-18 10% 60 20259990 X-11 algo 20259990 coins, POS 13% starts at Block 500, 60 Sec Block Time, 10 Confirms on TXs, 10 Confirms on mined blocks, Stake Age Min=1hr, Max=10days, Premine - ~10% leprocoin_lpc.png KGW 2014-01 LPC Leprocoin 2014-01-05 0 4 15 42100000 Lepro Coin - LeproCoin Abbreviation: LPC Algorithm: SCRYPT Date Founded: 1/5/2014 Total Coins: 42.1 Million Confirm Per Transaction: Re-Target Time: Kimoto Gravity Well Block Time: 15 Seconds Block Reward: 4 Coins Diff Adjustment: Kimoto Gravity Well Premine: None. volumecoin2_vol.png KGW 2015-08 VOL Volume 2015-08-15 1600 16 60 relaunch 10000000 ('h', 50000, 'b') airdrop Dissemination to denizens huntercoin_huc.png 1 block 2016 lookback 2014-01 HUC HunterCoin 2014-01-27 0 10 60 entertainment 42000000 Merge-mineable SHA256 / scrypt cryptocoin with re-target every block, 1 coins per block, and 42 total coins. Coin that you can aquire by playing simple game. Hunter Coin - HunterCoin Abbreviation: HUC Algorithm: SHA256 + Scrypt Date Founded: 1/27/2014 Total Coins: 42 Million Confirm Per Transaction: Re-Target Time: Every Block Based On Last 2016 Blocks Block Time: 1 Minute Block Reward: 10 Coins per block Diff Adjustment: Every Block Based On Last 2016 Blocks Premine:. litebar_ltb.png 24 hours 2014-02 4 LTB LiteBar 2014-02-12 0 (1, 5) 180 1350000 Lite Bar - Litebar Abbreviation: LTB Algorithm: SCRYPT Date Founded: 2/12/2014 Total Coins: 1.35 Million Confirm Per Transaction: 4 Per TX Re-Target Time: 24 Hours Block Time: 3 Minutes Block Reward: 1-5 Bars Diff Adjustment: Premine: None. ravencoin_rvn.png 2015-04 RVN Raven 2015-04-16 100% 60 760000 Raven [RVN], Algorithm: Scrypt, Block Time: 60 seconds 6% POS, Supply: 760,000 RVN, Stake Min Age: 1 Hours, ABOUT RAVEN Raven is an entirely proof of stake cryptocurrency, to be distributed through a sale of the total pre-stake supply at a price of 0.00002500 BTC per RVN. To prevent a staking monopoly, the maximum puchase of RVN will be 0.5 BTC per user. The minimum purchase is 0.001 BTC. To prevent sockpuppet accounts, buyers must provide a platform to show their active participation in the cryptocurrency community (Reddit, Twitter, Bitcointalk, etc). The supply sale will last for 14 days and any unsold RVN will be burned., We, as the development team, will retain 0.25% of the supply to stake until the network is stable, at which point these coins will then be burned also., Proof-of-work is about securing the network, but mining costs on alternative cryptocurrencies are so small that this proof-of-work system only gives the illusion of security. We have decided to distribute through a small premine sale in order to raise funds for the core developer to be able to dedicate himself to securing the network on an ongoing basis, run full nodes and provide core services for the Raven network., Raven currently includes stealth addressing and encrypted messaging, with a primary goal to implement a modified version of DarkSend and MeshNodes. To purchase RVN in the supply sale, you must send between 0.001 BTC and 0.5 BTC to the following development address: 1Cw8kuwcNFgejkdY93nJmNgq6gRC5AEnrv. Then, please message this account with your transaction ID, corresponding RVN address, and active cryptocurrency community account (if it is not your Bitcointalk account), your Raven will be delivered to the RVN address after you have passed the sockpuppet checklist. Otherwise, your BTC will be refunded. turroncoin_tur.png 2015-08 TUR Turroncoin 2015-08-31 100000 60 1000000 TURRON, TUR, POS, 8% A Year, Min Stake Age is 2 hours, Max Stake Age is 7 days, Minting is 50 blocks, Tx Confs is 2, TX Fee is 0.0001 TUR, 100,000 POW Supply in block 1 for the TIO servxcoin_xsx.png 2014-08 XSX ServX 2014-08-21 75000 12 60 12000000 Coin Name: ServX, Symbol: XSX, Algorithm: Scrypt Pure PoW, Total Coins: 12 Million, Block time: 60 Seconds, Block Reward: 12 Coins (No halving), Premine: 75,000 chncoin_cnc.png 2013-11 CNC CHNcoin 中国币 2013-11-03 88 60 premined 462500000 re-target every block, 88 coins per block, and 462.5 million total coins. LTC clone, announced on chinese forum first.0 starting diff Chinacoin - China Coin Abbreviation: CNC Algorithm: SCRYPT Date Founded: 11/3/2013 Total Coins: 462.5 Million Confirm Per Transaction: Re-Target Time: Block Time: 1 Minute Block Reward: 88 Coins Diff Adjustment. bitburst_btb.png KGW 2014-05 BTB Bitburst 2.6% 0.0035 15000 We have developed a new wallet with a new Graphic User Interface with many additional built-in functions: real-time price of major exchanges, webchat, etc. dashcoin_dash.png 2014-01 DASH 达世币 Dash 2014-01-18 84000000.0 re-target using Kimoto's Gravity Well, block reward controlled by moores law, and 84 total coins. mapcoin_mapc.png 2015-08 MAPC Mapcoin 2015-08-20 100% 60 3000000 Total supply: ~3,000,000 (3MM), Premine: 100% Premine, Devs Premine: 1%, Algorithm: x11 (POS), PoS: 2% a year latium_lat.png 2014-05 LAT Latium 2014-05-15 swansoncoin_ron.png 2014-03 RON SwansonCoin 742000.0 re-target using Kimoto's Gravity Well, random block reward, and 741,776 total coins. fakecoin_fkc.png 2014-03 FKC FakeCoin 900000000 - scrypt-based cryptocurrency - 6 confirmations for transactions - 50 coins per block - 899999900 total coins - 2016 blocks for changing difficulty frogcoin_fgc.png 2015-03 30 FGC Frogcoin 2015-03-20 250 25 200000000 Algorithm: Scrypt, Reward: 250 FGC, Block Time: 25 sec, Maturity: 30 blocks, Havling every : 400000 blocks, Total coins: 200000000 ('h', 400000, 'b') gaycoin_gay.png 2014-03 GAY Gaycoin 15600000.0 1560 coins per block and 15.6 million total coins. pandacoin_pnd.png 2014-02 PND Pandacoin 100000000.0 osmosiscoin_osmo.png 2014-03 OSMO Osmosis 1% 15500000 Block time: 2 Min Retarget up: Every 5 blocks (max 100%) Retarget down: Every block (max 200%) Block reward: 0.1 OSM Total supply: 15,482,880 + 1% PA Premine: 100% monkeycoin_mky.png 2014-02 MKY Monkeycoin 2014-02-21 antarctic_acc.png 2014-05 120 ACC Antarctic ((1, 250), 1000) 60 120000000 kimdotcoin_dot.png 2014-03 DOT Kimdotcoin 500 890000000 500 coins per block and 890 million total coins. quantradilbrycoin_q1.png 2015-07 Q1 Quantradilbry 2015-07-28 60 1000000 Name: Quantradilbry, Ticker: Q1, Algo: Srypt, Block Time: 1min waccoingold_wacg.png KGW 2014-06 WACG WaccoinGold 2014-06-21 60 600000000 Algoritm Hashing: Scrypt POW , Max Coin: 600 Millones, Block Reward: The reward of waccoin was specially designed to guarantee the value of the coin. The system was designed under the advice of experts in macro and micro economy to be a stable and profitable coin in short and long therm. When the difficulty grows the reward grows as well following the function described in the graphic: , Excel Block Reward and profit calculator:, Block time: 60 segundos , Dificulty retarget: Kimoto Gravity Well (KGW = 1 + (0,7084 * (PastBlocksMass/144)^(-1,228)), Cloud Seed Nodes (Faster conections) , Built-in DNS Seed computercoin_cmpt.png 2014-07 6 120 CMPT Computercoin 2014-07-10 3,000,000 CMPT(20%) will pre-mine for IPO 60 14580000 Computercoin., Computercoin is a cryptocurrency with X13 built in., Exchange:, Coin-ga:, Vote:, Vote on Cryptoine!, Vote on Bter!, Logo desgin:, First:, Shareholder list:,, About:, The computer is a great initiative, it changed the human world, changed people's lives. Wherever you are, whatever you do, you must use it. This is a miracle., Computer linked to the people around the world, whether rich or poor, beautiful or ugly, we can equal exchange., n this world, no computer can not do a thing. Shopping, travel, cooking, live, learn, work, any industry, any corner will have his presence. Computercoin will achieve all of the above functions. Do not believe? Let us take a closer look at it. , Specifications:, Algorithm: X13 + POS, Total: 14,580,000, Difficulty retarget : every block, Mined Block Confirmation: 120, Transaction Confirmation: 6, Block Rewards:, 0: (4million for IPO), Block 1-200: 0.1 block rewards, After 201 block there will be 2000 rewards, until 7 days later.(8200 block), Pow:14,580,000, PORT:, Port: 9461, rpcport=9462, Wallet:, Windows:, V1.0.0.1, Mac:, Source:, POS Interest:, Min stake age: 1 day, Max stake age: 1000 days, 18% in the first year, 15% the second year, 13% in the third year, 10% in the fourth year, Stay at 5% final, IPO:, IPO is necessary to run the project. Part of the funds will be used to stabilize the market., IPO will last 10 days, do not miss a chance, otherwise you will regret., IPO total coins: 4 million(20%).Total shares:100, Up Investment: 10 shares,every share contains 40000 CMPT., Each stage ended, Then go to the next stage., New shares:, Stage 1: 0.05 BTC (25/25 shares) supply 1,000,000CMPT, Stage 2: 0.08 BTC (7/35 shares) supply 1,600,000CMPT, Stage 3: 0.14 BTC (0/20 shares) supply 1,000,000CMPT, Bounties and Giveaways: 20 shares supply 800,000CMPT, My BTC address:1awRRT56LyLpdHtBzj5owqNXpFfndNWjq, Escrow by cooldgamer;u=79720, Fee:1%. It is paid by buyer,it means if you join 0.05BTC,you need paid 0.05*0.01+0.05=0.0505BTC., Round 1:1Ggy8KkkMq3pohBpLYqGWfhh2Jp9zRv1FT, Round 2:12ibX1dMf8BUxTi4izDEc5pPFtb7Q5kcp7, Don't forget send me and escrow a private message:, 1) The amount you're investing., 2) Txid, Pool:, Hashharder:, 0% fee,, IRC: ,, Block Explorer:, First:, Second:, Faucet:, , Game: , , Translation: , Chinese:. by Ribuce.J 10000CMPT, German: by DrMotz 10000CMPT, Portuguese: by Mxrider420z, Spanish: by alexvillas, Romanian: by kencoles, Italian: by Anon39, Bouties:: , Multipool: 2 shares., Mac wallet:2 share., Logo design:1 share., Android wallet:1 share., Translation:10,000CMPT, First Block Explorer:1 share, Second Block Explorer:0.8 share, Faucet:1share., Game: 1 share., FAQ: , 1,Why IPO? , Some people have not hash to mine, but they want join in. Do you agree?, 2,How much will pre-mine coins?, 3,000,000 CMPT(20%) will pre-mine for IPO. Do you agree?, 3,When the pre-mine coins distribution? , Within one hour after launch wallet, download it and PM me ,or you can send mail to me Do you agree?, Have any questions? PM me or send e-mail to me., E-mail:computercoindev@gmail.com, Twitter: polishcoin_pcc.png 2014-03 PCC PolishCoin 150000000.0 v8coin_v8.png 2014-07 20 V8 V8coin 2014-07-01 1% 500 120 10000000 Total Number of POW coins : 10,000,000, Block rewards : 500, Number of blocks : 20000, Block Time : 120 sec, Confirmation : 20, POS : 1% yearly, Premine : 1% = 100,000 V8, IPO : 3%= 300,000 V8, First 50 blocks reward will be 1 V8 kingdomcoin_king.png 1 block 2014-11 KING Kingdomcoin 2014-11-18 30% 60 2417600 Algorithm: X13 -(blake, bmw, groestl, jh, keccak, skein, luffa, cubehash, shavite, simd, echo, hamsi, fugue), Ticker: KING, Total supply: 2.417.600 KING , Max POW Height: 20160 ( 14 days in total ), Block Time: 60s, PoS Interest: 2.5% per year, Min stake Age: 2.5 hours, ICO premine is 30%, 750000 coins will be up for sale., Diff retarget every block (it’s the safest and best option for smooth mining), Premine will be located in the first block and we will set a 0 block reward for the initial 100 blocks. Hope this is fair for everyone. insanitycoin2_ins.png 2014-12 1 INS Insanitycoin 2014-12-15 5% 1 43200 1000000 Specifications:, Max total coins: 239234, Block time: 12 hrs, Coins per block: 1, Maturity after: 1 block, Premine: 5% (11961 coins), Points of note 1) Insanely difficult to mine and very scarce 2) May or may not be a joke (depends on how I feel and whether people want this to continue) 3) MASSIVE PREMINE - this will take 16.4 years to be matched by mining!!! 4) Novelty 5) Fun (for me) 6) THIS WILL MAKE YOU RICH (warning: for legal reasons this will probably not make you rich), About the premine - a tribute to Nakamoto's huge instamine of Bitcoin and the massive instamine of Darkcoin - although I probably won't dump this early (it's insane not stupid) I like the power of having it as a kill switch in case of establishment takeover - the government/NSA/CIA/Walmart/the guy who owns the shop down the street are not going to get anything useful - if it ever has any value I will auction limited quantities (to reduce the risk of devaluation) in order to fund further (real) technical development - may be used to build a giant golden goose and buy the Spruce Moose flowercoin_flow.png 2014-07 FLOW FlowerCoin 2014-07-30 0 60 500000 English name:Flower Coin, DEV team:a team with worldwide teamers, Our team have 16 workers., Born Time:2014.7.30, Total amount:500000, Premine:ofc no, Flower coin is a X11 coin Roll Eyes Roll Eyes, Wallet:, Sourcecode:, :::::::::: Features :::::::::::, - 60 seconds block time, Which means Fast confirming transactions, - 20000 Blocks, 1-5000 50, 5000-10000 25, 10000-15000 15, 15000-20000 10, - Every trade needs 6 confirminations., :::::::::: MINING POOLS ::::::::::,,,, , we want to make a honest and successful altcoin,and we promise TLC is what we want to make!, :::::::::: BLOCK EXPLORERS :::::::::::,, :::::::::: Something we prefer to do :::::::::::, Flower coin not only represents trend of the future, and is the pioneer of techniques, ideas, and vision., one of Altcoin's first vision is “to remove the concentration”,so with the increasing of altcoins' price,ASIC was borned., ASIC has changed the original beauty, resulting in the altcoin can only be hold in a few people, including the development of ASIC vendors, or early buyers of the ASIC,and “decentralization” disappear since then., so Flower coin to be a poineer to realize 'fairness' and fight against ASIC., we make it impossible to mine Flower coin by ASIC with Adaptive N-Factor, and other Technology:Kimotos Gravity Well,will also be used in Flower coin to ensure the investors' interest., We take new technology to ensure announce the wallet and source code.And the wallet can be used automatically after the end of countdown, so that we can promise nobody can premine. anarchistprimecoin_acp.png 8 hours 2015-03 ACP AnarchistPrime 2015-03-24 0 32 180 53760000 Bitcoin (SHA256) , Halving: 840,000 blocks , Initial coins per block: 32 coins , Target block spacing: 3 min , Diff retarg: 8hrs, Premine: 0, Max ACP minted: 53,760,000 coins ('h', 840000, 'b') coloradocoin_colc.png 2014-09 COLC Coloradocoin 2014-09-05 45 303720000 Algorythm - SCRYPT, Type - Proof of Work/Proof of Stake(6% Annually), PoW Phase - 20,000 blocks, Block Reward - 0.0001 - 1.000(reward raises with difficulty), Time Per Block - 45 seconds, Max Coins - 303,720,000, Pre-Mine - None(Unless IPO then only IPO amount), IPO - TBD(We probably wont be aren't fully decided) gcoin_gcn.png 2014-04 GCN Gcoin 2014-04-08 3% 81 33000000 Gcoin is a Philanthropy based cryptocoin developed by Freemasons for the "love of humanity" Please note: This is not the official cryptocoin of any recognized Masonic body in the world at this time. This coin has been developed by Freemasons, It was developed for charitable purposes only. Scrypt, Kimoto Gravity Well, 2 minute blocks, 150,000 reward per block, Subsidy halves every 533333 blocks, Total Coins: 200 billion., Pre Mine: 40 billion. spktrcoin_spktr.png 2015-06 60 SPKTR Spktr 2015-06-23 60 22700000 Ticker: SPKTR, Algorithm: SHA256, RPC Port: 24511, P2P Port: 24514, 60 Seconds Per Block, 60 Blocks to Confirm, 20MB Blocksize, POW/POS, Min. Staking Age: 3 Hours, Est. Total Supply:1,370,000 POW, 900,000 POS magicoin_magic.png 2014-04 MAGIC Magicoin re-target every 2016 blocks, 50 coins per block, and 84 million total coins. scarcecoin_sec.png 1 block 2014-11 SEC ScarceCoin 2014-11-17 20 0.1 60 26289.99843358 ScarceCoin specs., mining algo: scrypt, 1 minute block targets, 0.1 coins per block, subsidy halves in 131400blocks (~3 Month), First year mined 24647,5 SEC, 26289,99843358 total coins after 5 years and 9 month, 1 blocks to retarget difficulty, Lowered fee: 0.00001000 SEC minimum ('h', 131400, 'b') 7coin_7.png 2014-01 7 7coin 0.00000700 77 7 bitcashcoin_bcsh.png 2014-08 4 120 BCSH Bitcash 2014-08-23 60 5000000 Ticker: BCSH, Distribution: PoW/PoS, Algorithm: x14, Total coins: 5'000'000 in POW, Block Reward:, (first 14400 blocks during ICO phase has 0.01 BCSH reward to mantain the blockchain working and test it safely. 144 BCSH will be produced within this ICO period and will be free distributed to users with giveaways and promotions!), 1-7 Block Days : 100 BCSH, 7 - 14 Block Days: 50 BCSH, 14 - 21 Block Days: 25 BCSH, 21 - 28 Block Days: 12,5 BCSH, 28 - 42 Block Days: 6 BCSH, 42 - 733 Block Days: 3 BCSH, After a total of 733 days of PoW mining BitCash will switch to pure PoS., Total coins to be mined in PoW mining 5 Millions, Block Time: 60 Seconds, TX Confirmation: 4 blocks, Annual interest: 5%, 120 minted block confirmations nicecoin_nic.png 2014-07 4 30 NIC Nicecoin 2014-07-23 100% 60 65000000 As the name signifies, it the nicest crypto currency to provide essential elements to grow the crypto ecosystem with its path breaking strategies., This is the first Coin to promote a new concept called “POHE (TM) - Proof Of Human Effort” instead of machine efforts. You get paid for using NiceCoin in transactions & services related to it., The Coin is a 100% POS and will be distributed very fairly using Marketing Bounty Programs and POHE. The users of Nice Coin will receive coins on engaging with the services using nicecoin, both free and paid services. That means more the engagement, more the earnings!!!,, The coins are called “NIC” and NICs for more than 1 coin. For Decimal places upto 8, we have given name as Nickels., Due to our team marketing efforts, the NiceCoin will open ample new business verticals for accepting crypto currency as an important mode of transacting. We will keep sharing some good news in coming days., Welcome to Nice Days!, Coin Specifications are as follows:, - Total coins: 6.5 Billion NIC, - Algorithm: Scrypt, - 100% PoS, - Symbol: NIC, - PoS interest 5%, - Minimum Coin Age: 1day, - 60 second block target, - 30 confirmations for blocks to mature!, - Retarget difficulty each block, - Coin Confirmations: 4, - NO IPO, - 100% Premine bitsiscoin_bss.png 2015-04 2 250 BSS Bitsis 2015-04-13 0.0018% 25 600 28000000 Abrv: BSs, Algo: Sha256d, Type: Proof Of Work, Coins Per Block: 25, Halving: 105,000 blocks roughly 2 years, Block Time: 10 minutes, Max Coins: 28,000,000, Coinbase Maturity: 250, Confirmations 2, TXFee = 0.01, Expceted Blocks Per Day: 144 Blocks, Expected coins per day: 3600 (BSs), Expected year mining ends: Year 2036, Premine: 0.0018% ('h', 105000, 'b') solocoin_solo.png 2015-08 9 SOLO Solocoin 2015-08-01 4% 120 closed source, revoked ico 312500 Specs SOLOCOIN -SCRYPT-POW Abbreviation -SOLO- Max Money 312500 Total coin POW 300000 Premine 4 % 0-10 Genesis Block BOUNTY,EXCHANGE,DEVELOPMENT,ETC difficulty Launch : 0.00131203 0-150000 1 Coins 150000-300000 0.5 Coins 300000-450000 0.25 Coins 450000-600000 0.125 Coins Block Time 120 Seconds maturity 9 blocks kushcoin_khc.png 2014-01 KHC KushCoin 2014-01-11 4200 420 42000000 Grow KushCoins Solo or start a Grow-Op!!, 4,200 coins per crop, 420 seconds to grow each crop, 420,000,000 total coins, SCRYPT growing. klingondarsek_ked.png 2013-08 KED Klingon Empire Darsek POW / POS and 3.5 coins per block. StarTrek-insipred coin. californiacoin_cac.png 2014-04 CAC CaliforniaCoin 10,000 coins per block and 16.8 billion total coins. node_node.png 2014-05 NODE NodeCoin 节点币 2014-04-30 100% 60 infrastructure, hybrid coin and payment system 600000000 Node.js cryptocurrency implementation exabytecoin_exb.png 1 block 2015-05 6 120 EXB ExaByte 2015-05-25 non 1 30 500000000 ExaByte is mineable with three algorithms and random rewards., Specifications: Proof of Work based. Mine using any of the 3 algorithms : sha256, scrypt or groestl., Default algorithm is sha256, Difficulty is retargeted every block. (4% up ,2 % down), 30 seconds, Max coin will be 500,000,000 coin ever, Random Coins per Block Based on Probability, 48% 5, 25% 10, 15% 25, 11% 50, 1% 100, After block 1,000,000 reward remand at 1 coin.~ ( 1 year ), Confirmations: Confirmations for mined blocks are 120, Transactions require 6 confirmations to become valid. random rewards norrencoin_xnc.png 2015-10 6 21 XNC Norrencoin 2015-10-20 600 10 60 closed source, no blockchain 1000000 The abbreviation is XNC, Proof of Work, 10 XNC of reward each block, Halving every 210000 blocks, Small premine of around 600 XNC, No blockchain, nobody can see your transactions, Coinbase maturity: 21 blocks, Number of confirmations: 6 ('h', 210000, 'b') scrolls_scq.png 1 block 2014-06 3 50 SCQ Scrolls 2014-06-02 1 42 7000000 Block Reward: 1 SCQ, Block Time: 42 sec, Difficulty Re-target: every block w/ KGW, Confirmations: 3, Mined Block Maturity: 50 blocks, Block Halving Rate: Approx. every 4 years, Total Coins: 7,000,000 SCQ. Scrolls is a new, rare, fast cryptocurrency that utilizes the popular Scrypt algorithm. Scrolls also includes Kimoto's Gravity Well (KGW) for efficient difficulty re-targeting after every block making multipool issues a thing of the past. The Scrolls Foundation, which will be implemented shortly, will rely only on donations to support the YAC, Young Archaeologist Club. skynetcoin_snet.png 1 block 2014-08 45 SNET Skynetcoin 2014-08-29 2000000 60 1000000 netcoin2_net2.png 2014-02 NET2 Netcoin2 re-target every 10 blocks, 50 coins per block, and 50 total coins. zeitcoin_zeit.png 1 block 2014-02 4 50 ZEIT Zeitcoin 2014-02-14 0 10000 30 99000000000 re-target every 1 block, 1,000,000 - 250,000 coins per block, and 99 billion total coins. Zeit Coin - ZeitCoin Abbreviation: ZEIT Algorithm: SCRYPT Date Founded: 2/14/2014 Total Coins: 99 Billion Confirm Per Transaction: 4 For TX and 50 For Confirms Re-Target Time: Every Block Block Time: 30 Seconds Block Reward: 10k Coins Diff Adjustment: Premine:. qcoincoin2_qtc.png 2015-10 QRT Qcoin 2015-10-06 91800000 100 64 102000000 Scrypt, POS/POW Hybrid, POW ~102,000,000 QTC, Premine: 91,800,000, 100 Qcoin per block, 1% per year, Block time? 64 seconds, Ports: 19784 for P2P, 19785 for RPC bitokcoin_bok.png 2014-12 BOK BITOK 2014-12-01 100% 60 RU-focused 100000000 Coin BITOK: This POS / POW coin. The number of coins already generated 100 million BITOK Growth of 1% per year growth will continue for about 50 years for the promotion of this coin was collected cash pool of $ 120,000 ue in October 2014, ie 2 months ago. One of the goals of the company - is to bring this to the forefront of a coin, for use in international money transfers, as well as to pay for online stores. silverbulletcoin_svb.png 2015-05 SVB SilverBullet 2015-05-14 0 60 569400 SilverBullet: algo X13, premine: none, ico: none, mining will stop after 13000 blocks. Block time 60s. PoS 6%, Minimum age 1 hour, maximum age 8 hours diraccoin_xdq.png 20 blocks 2014-05 XDQ Diraccoin 2014-05-19 180 multimining 2272800 ISO 4217 Trading Symbol : XDQ, Monetary Symbol :, Target block time : 180 seconds, Difficulty retarget : 20 blocks (every hour), Maximum Coins: 2,272,800, Starting block reward : 8, First reward reduction @ 43201 : 1.25, Second reward reduction @ 744001 : 0.75, Third reward reduction @ 1448001) : 0.5, Fourth reward reduction @ 2145601 : 0.25, Fifth reward reduction (inflation mode) @ 2846401 : 0.01, yolocoin_yolo.png 5 blocks 2014-05 YOLO Yolocoin (1000, 1) 120 7775000 a PoW cryptocurrency with a low inflation rate mooncoin_moon.png 8 hours 2013-12 MOON MoonCoin 2013-12-28 0 29531 90 384400000000 Moon Coin - MoonCoin Abbreviation: MOON Algorithm: SCRYPT Date Founded: 12/28/2013 Total Coins: 384.4 Billion Confirm Per Transaction: Re-Target Time: 8 Hours Block Time: 90 Seconds Block Reward: 29531 MOON Diff Adjustment: 8 Hours Premine:. inkcoin_ink.png 2014-03 INK INKcoin 11000000.0 SHAvite-3 based cryptocoin with re-target every 7 days, 50 coins per block, and 11 million total coins. samsaracoin_smsr.png 2015-06 SMSR Samsaracoin 2015-06-16 66% 75 60000000 Algo: Qubit, Ticker: SMSR, Total supply: 60 million, POW/POS, block time: 75 seconds , minimum stake age: 6 hours , maximum: unlimited , 40m coins will be sold at 150 sats each. POW ends 16100. POS will start on block 16000 so we have smooth transition from pow to pos instaminenuggetsacoin_minew.png 3 hours 2015-02 3 MINEW InstaMineNuggetsA 2015-02-27 750000 50 180 1500000 InstaMineNuggetsCLASSA $MINEW, Coin Type: Litecoin Scrypt, Halving: 7500 Blocks, Initial Coins Per Block: 50 Coins, Target Spacing: 3 Minutes, Target Timespan: 33 Hours, Coinbase Maturity: 3 Blocks, Pre-Mine: 50%=750000 Coins, Max Coinbase: 750000 + 750000=1500000 Coins. $MINE has launched 1.5 million ($MINEW) (CLASS A CRYPTO COIN CONTRACTS) on 02-27-15. $MINEW will be convertible into $MINE at 2.00 $USD per coin with a 2 year Expiration Date from purchase or can be bought/sold freely like $MINE once exchange listed. $MINEW is a seperate 1.5 million capped “Crypto Coin Contract” blockchain based cryptocurrency that will trade freely alongside $MINE once listed. $MINEW can be bought and sold with “no lockup period”. ('h', 7500, 'b') aeromecoin_am.png DGW 1 block 2014-12 AM AeroME 2014-12-08 100% 60 asset issuance instrument, ipo cancelled 12000000 AeroME (AM), X13, 60 second block time, PoW + PoS Hybrid, 12 Million coins Premine in first few blocks, 100% Premine Block 1 = Premine (12 million coins for ICO), Block 2 > 100,000 = 0 coin rewards, in-case PoW mining is required to keep the chain moving, 12 hours min coin stake age, Unlimited max stake age, DGW re-targeting starts at block 20 frenchgeniuscoin_genius.png 2015-02 20 GENIUS FrenchGenius 2015-02-24 60 physical coin backed 100000 Coin name: French Genius (Angel), Coin ticker: GENIUS, Max mint: 100 000, Virtual coin true value: 1:1 with one physical Genius gold coin, Transaction fee: 0.0001 GENIUS (around 2 cents), PoS ~0.00001, Algo: SCRYPT metalcoin_metal.png 1 block 2014-11 METAL Metalcoin 2014-11-27 1000000 90 91315000 Total supply: ~91315000 Metal (will be less, PoS will kick in on block 22501), Algorithm: x11 (POW), POW Blocks: 50000 (will be less, PoS will kick in on block 22501), Difficulty retarget: every block, PoS: 5% a year, PoS start: block 22501, Premine: 1000000 METAL, Block time: 90 seconds, Block 1: premine monetacoin_monet.png 10 blocks 2015-10 120 MONET Moneta 2015-10-20 84000000 1 30 184000000 NAME: MONETA, Short Name: [MONET], algorythm: Scrypt, coin supply: 184 000 000, ColdBlackBox safe: 84 000 000, coin mining available: 100 000 000, block generation time: 30 seconds, block max size: 8 Mb, block reward: 1 MONET, start difficulty: 0.00024414, change difficulty: 10 blocks, coinbase maturity: 120 blocks, market available: 80 000 in 1 day, start price: 0.00050001 BTC 1 kw/h in the world, FIRST 5 000 blocks - reward 10 MONET, <10 000 blocks - reward 5 MONET, >10 000 blocks - reward 1 MONET bitzenycoin_zny.png DGW 2014-11 ZNY BitZeny 2014-11-08 250 90 250000000 Symbol ZNY, Algorithm Yescrypt(GlobalBoost-Yとは少し異なるため注意), Max Coins 250,000,000(2億5000万), Premine 0, Block time 90 秒, Difficulty DarkGravityWave3, Block reward 250, Halving every 500,000 blocks ('h', 500000, 'b') snakecoin_snake.png 2014-03 SNAKE Snakecoin 2014-03-08 5% 120 100000000000 bananacoin_banc.png NGW 2014-05 5 BANC Bananacoin 2014-05-19 50000 90 50000000000 ('h', 1, 'm') ripple_xrp.png 2013-05 XRP Ripple 2013-05-01 100000000000 Protocol, PoS winecoin_wnc.png 2014-07 WNC Winecoin 2014-07-26 10000 60 1000000 Block time: 1 minute Difficulty Adjustment: Each block Standard Dividends: 1% / year The minimum transaction fee: 0.0001WNC Transaction fees paid to miners Trade confirmations: 10, mature confirm: 500 Coin maturity: 8 hours p2p port: 9411,, pc port: 9412 Algorithm: scrypt Block Awards: 10000WNC, no halving POW phase 10000 blocks, after that switching to pure POS. Hybrid POW/POS WineCoin utilizes a hybrid POW / POS system. This allows for a fair distribution during the POW phase, and then switched to an energy efficient POS system. fullintegritycoin_fic.png 2014-10 FIC FullIntegritycoin 2014-10-28 1000 60 product range support 4204800000 Name: FULL INTEGRITY COIN, Abr. : FIC, Coins per Block: 1000, Block per day: 1440, Total coin in 1 day: 1440 * 1000 = 1,440,000 Coins, Total Block in 4 years= 2,102,400, Total Coins to generate: 4,204,800,000 echocoin_echo.png 1 block 2014-10 360 ECHO EchoCoin 2014-10-18 0.4% 40 60 relaunch 5000000 ('h', 43000, 'b') yovivirtualcoin_yovi.png 2015-07 YOVI Yovivirtualcoin 2015-07-13 100% 150 22830000 Ticker: YOVI, Algorithm: SHA256, Est 120-180 Seconds Per Block, Est Money Supply: 22.83m , POS: 5% / year wampumcoin_wam.png 100 blocks 2014-03 WAM Wampumcoin 2014-03-22 60 25000000000 scrypt ('r') starcoin_stc.png 2015-09 STC StarCoin 2015-09-12 110 60 2420000 Name: StarCoin, Short name: STC ★, Algorithm: X11, Total Coins: 2,420,000, Block Halving: 11000, Block Reward : 110 coins ('h', 11000, 'b') multigateway_mgw.png 2014-08 MGW NxtMultigateway 2014-08-20 asset:4551058913252105307 1000000 Multigateway (MGW) is a third party service developed on top of the NXT network that allows you to move cryptocurrencies in and out of the NXT Asset Exchange, the peer-to-peer exchange that offers decentralized trading with no trading fees. petrodollar_xpd.png 2014-03 XPD PetroDollar 1220000000 re-target using Kimoto's Gravity Well, block range coin rewards, and 1.2 billion total coins. 5 minute transaction time 288 blocks per day 120 blocks to mature Blocks 105,120 per year SHA-256D Kimoto's Gravity Well nanitecoin_xnan.png 2014-11 XNAN Nanite 2014-11-06 1% 60 1000000 Algorithm: X11, Block Time: 1 Minute, Block Reward;, 1 - 500 = 400 XNAN, 500 - 1500 = 350 XNAN, 1500 - 3000 = 150 XNAN, 3000 - 4001 = 225 XNAN, PoW Ends: Block 4001, PoS Starts: Block 3800, PoS Interest: 200%, Pre-mine: 1% (10,000 XNAN) craigscoin_craig.png 2014-09 CRAIG Craigscoin 2014-09-12 30 30000000 Name: CraigsCoin (CRAIG), 100% POS, Presale coins: 30000000 CRAIG, POS annual interest: 2%, Block time: 30 sec, The main idea of CraigsCoin is to provide the world with trustless, decentralized classified ads listing. Since everything is stored in the blockchain no entity will be able to delete or somehow edit an ad once it is posted., All 30000000 CraigsCoins are going to be sold via Bittrex ICO. Coin price will be determined at the end of the ICO (if there will be 10 BTC raised during the ICO it will mean that one CRAIG is worth 0.00000033 BTC) jesuscoin_god.png 2014-01 GOD Jesuscoin ferretcoin_fec.png 960 blocks / maximum difficulty retarget 123/55 (~+123%) and 55/123 (~-55%) respectively 2013-07 FEC Ferretcoin 2013-07-01 0 23 90 123000000 Scrypt-based cryptocoin. Yacoin clone. Ferret Coin - FerretCoin Abbreviation: FEC Algorithm: SCRYPT Date Founded: 7/1/2103 Total Coins: 123 Million Confirm Per Transaction: Re-Target Time: 960 Blocks Block Time: 1.5 Minutes Block Reward: 23 Coins Diff Adjustment: maximum difficulty retarget 123/55 (~+123%) and 55/123 (~-55%) respectively Premine:. optioncoin_option.png 2 blocks 2015-06 37 OPTION Optioncoin 2015-06-03 21000000 1 60 21001000 FREE DISTRIBUTION - 1,000,000 COINS (or 5%) will be given away to supporters of the coin for free!!!, Coin structure:, Block 1 - 20,000,000 ICO, Block 2 - 1,000,000 for free distribution, Block 2 - 1000 - 1 just to get the block chain working.OPTION is a PoS-based cryptocurrency., SLING is dependent upon libsecp256k1., Total POW: Blocks POW Reward: 137 SLING per Block POS Reward: 1.337 SLING Block Spacing: 60 Seconds Diff Retarget: 2 Blocks Maturity: 37 Blocks Stake Minimum Age: 8 Hours, Port: 30137 RPC Port: 30138 ircoin_irc.png 2014-04 IRC IRCoin celebration 55000000000 PoW Algorithm: Scrypt Difficulty Re-target : KGW Block Time: 60 seconds Initial Block Reward: 65000+ Coins per block Max Supply: 55.000.000.000 Coins bitcrystalcoin_btcry.png 90 days 2014-10 BTCRY BitCrystal 2014-10-31 500000000 15 9999999999999 Minimum Reward: 0.00000000000025 BTCRY, You can solo mining this coin and you can using this Coin in minecraft with my plugin!, You can use this plugin for all Cryptocoins and its integrated with shopsystem. , The configs should be self explained., Improved Design,Changed Retarget time and changed icons., 1 block is easy to get due to the weak level of difficulty., blocks halving after every 80640 blocks (14 days), The reward is halving every 14 days you get after 1 month the reward (500000000) divided through 4, after 2 month reward (500000000) divided through 8 and so on., When you reach the minimum of 25 coins then divided the minimum value of 25 coins / 100000000000000 so that the maximum minimum reward is 0.00000000000025 BTCRY pro Block. ('h', 14, 'd') bottlecaps_cap.png 2013-06 CAP Bottlecaps 2013-06-24 premined 47400000 re-target every 4 hours, POW / POS, 10 coins per block, and 47.4 million total coins. NVC clone, with a 0.25 diff start and quick 9 years inflation Abbreviation: CAP Algorithm: SCRYPT Date Founded: 6/1/2013 Total Coins: 48 Million Confirm Per Transaction: 5 Re-Target Time: 4 Hours Block Time: 1 Minute Block Reward: 10 Coins Diff Adjustment Starts at 0.25 with a 4hr Difficulty target time. Novacoin-based. 1% NVCS, SA 30/90, Coins 2.26 chikun_kun.png 2014-03 KUN Chikun 2014-03-10 10000000.0 goddesscoin_godd.png 2014-07 60 GODD Goddesscoin 2014-07-27 1.5% 153 60 3200000 Exchange, C-CEX Soon, Bittrex Soon, Poloniex Soon, Social networking sites, Website:Announced later, Twitter:, Email:GoddessCoins@gmail.com, Download, Wallet password:&#%@RFSD, Windows:, Mac:Reward 500 GODD, Linux:Reward 500 GODD, GitHub:, Specification, Total: 3.2 million, Coin Name: Goddess Coin [GODD], PoW Algorithm: X13, PoW + PoS hybrid, Confirmations for blocks to mature: 60, PoW Total Blocks: 10800 PoW blocks [7 days mining], 60 second block time, PoW block reward: 153 GODD, 60 confirmations for block to mature, POS starts at 10080 block, PoS Interest: 9%, Pre-mining:1.5% [high-quality Exchange and reward ], It is the first block, No IPO, RPC:19611, p2p:19612, Block Explorer, Awards 2000 GODD, Mining pool, We will select some to join the pool of high-quality,,, Faucet, Awards 2000 GODD, Paper Wallet, Later released, Reward, Translate: 100 GODD, propaganda: 100 GODD, Game: 1000 GODD, Business: 1000 GODD, For more information about GoddessCoin will be announced in the future walletcoin_wtc.png 5 minutes 2015-11 3 5 WTC WalletCoin 2015-11-24 0 10000 300 closed source 420000 Algorithm: Scrypt, Coin Name: WalletCoin, Coin Abbreviation: WTC, Address letter: W, RPC Port: 5354, P2P Port: 5353, Block reward: 10000 coins, Block halving: 210000 blocks, Total coin supply: 420000 coins, Coinbase maturity: 5 blocks, Number of confirmations: 3 blocks, Target timespan in minutes: 5 minutes, Target spacing in minutes: 5 minutes ('h', 210000, 'b') blitzcoin_bltz.png 2014-03 BLTZ BlitzCoin 1% 100000000 Retarget: Kimoto Gravity Well Block Time: 60 seconds Transaction: 3 confirmations Maturity: 30 confirmations Block Reward: 10,000 BLTZ Max Supply: 100,000,000 BLTZ Premine: 1% (1,000,000 BLTZ for IPO Hybrid) tattoocoin_ink.png 2014-05 INK Tattoocoin 50 21000000 50 coins per block and 21 million total coins. smileycoin_smly.png 5 days 2014-11 SMLY Smileycoin 2014-11-03 25000000000 10000 180 premined 50000000000 Initial block reward: 10000 SMLY., The coin is based on litecoin source. The source code is freely available., A total of 50*10^9 (50 billion) coins will eventually be generated, half during the pre-mine phase., Difficulty will be adjusted approximately every 5 days so as to obtain a new block every 3 minutes on average. This difficulty schedule aims to reduce the non-premined coins by 50% over 7 years. thirdgenerationcoin_tgc.png 2014-06 TGC ThirdGenerationCoin 2014-05-07 vardiff 180 21000000 We define the cryto-currency with 3 stage: The first stage is the POW(proof of work). Bitcoin The second stage is the POS (proof of stake). PPC The third stage is POB(proof of burn) reference (Slimcoin Whitepaper PDF),thanks for their great work.But we find some drawback with distribution model of slimcoin.So we try to improve this model.Make it more profit for the holders. 1. We reduce the amount to 21 Million 2. Make the mint confirm faster and less block confirm. 3. Improve the distribution model. 4. Make burn Coins more profit. Proof-of-work is used as a mean for generating the initial money supply.But if someone doesn’t have any rig, can not take part in the distribution. Specifications Uses Dcrypt algorithm, an algorithm made to be difficult to implement on an ASIC. Tri-Hybrid Blocks: Proof-of-Burn blocks, Proof-of-Stake blocks, Proof-of-Work blocks Block time is 3 minutes (180 seconds) Difficulty re-targets continuously Block Rewards: Proof-of-Burn blocks: max 200 coins Proof-of-Work blocks: max 20 coins Pos:7 days Block rewards decrease in value as the difficulty increases Total 21 million coins emoneycoin_ecash.png 2015-02 120 ECASH EMoney 2015-02-24 60 5000000 Algorithm - SHA256, Premine 1% Of Total PoW Blocks (40k Coins, approx 1% due to POS) Escrow Needed, POS Maturity - 24 hours, Block Target - 1 minute, 120 Blocks for Maturity nukecoin_nuke.png 2015-07 NUKE Nukecoin 2015-07-07 100% 30 2778196 Ticker: NUKE, Full-POS No-Mining, 50% yearly interest, minimum stake age: 1 hour, max supply 6,031,900, 100% ICO: Duration 3 days breakcoin_bre.png 2015-08 BRE Breakcoin 2015-08-24 100% 60 760000 BREAK COIN [BRE] | 760.000 coins | 50% POS | MASTERNODE | Only POS, Only 760, 000 COINS, 50% POS, 10,000 per masternode, Min. Staking age : 8 hours, Bonus blocks 5, 10 , 25, 100, 500, You need 10,000 to make any chance to get bonus blocks peppsycoin_psy.png 9.5 blocks 2015-10 PSY Peppsycoin 2015-10-17 9.5 30 block 0 at 2015-05-01 159595959 NAME: Peppsycoin, ALGO: Scrypt, SUFIX: PSY, Coin Target: 30s, Block reward: 95 PSY, Block Halfing: 788400 blocks, 273.75 days, Retarget: 9.5 blocks, Total: 159595959 ('h', 788400, 'b') neocortexcoin_neoc.png 2014-07 NEOC NeoCortexCcoin 2014-07-28 20000 NEOC 90 4893200 NeoCortex - Coins for the Mind, Introducing the latest mining algorithm - NeoScrypt - with thanks to GhostLander for the creation of the algorithm., Algo: NeoScrypt (profile 0x3), Ticker Code: NEOC, Coin Name: neocortex (neocortexd and neocortex.exe), Config Location: neocortex.conf, Block Rewards, 10K NEOC Premine (Originally For Bounties - see below) in Block 1 - But Unintentionally discarded due to blockchain fork., Blocks 2 - 99 - 2 NEOC, Blocks 100 - 499 - 20 NEOC, Blocks 500 - 20000 - 250 NEOC, Blocks 20000+ - 50 NEOC + Transaction Fees, The shorter plan for the block reward system, is to allow for an easy inclusion of PoS at some point after block 20,000 if the coin's value is sufficient to warrant it., Use the following nodes to help connect if you are having issues: (add to bottom of neocortex.conf and remove your peers.dat file), Code:, rpcport=25555, rpcuser=rpcuser, rpcpassword=rpcpass, addnode=107.170.90.55:36454, addnode=37.59.21.199:29030, addnode=107.170.165.93:58468, addnode=54.191.27.187:34347, Difficulty Algorithm, The same difficulty algorithm as in Monocle has been used for this coin., Mining, As this coin NeoScrypt - no current GPU miner will work for this coin. The CPU miner from GhostLander is also untested with this coin's exact workings. Bounties (see below) will be provided., Github Source: -, Windows Wallet: -, MAC Wallet: -, CPU Miner,, Code:, solo mine using -a altscrypt -o -u rpcuser -p rpcpass, GPU Miners - Required - See Bounty Section Below, Pool Python-Hash - Required - See Bounty Section Below, Pool - Required - See Bounty Section Below, NEOC to BTC on “FobCoin” -, Windows Wallet - 'ascii' - 1000 NEOC, MAC Wallet - Donations to 'w8' - 9i4QFZe4tvyFTcs1xna6Cky7YYk2SK1NcA, Working Tested CPU Miner, Working Tested GPU Miner AMD - 1000 NEOC Bounty, Working Tested GPU Miner nVidia - 1000 NEOC Bounty, Working Tested Rental Rigs Site - 500 NEOC Bounty, Python Hash For Pools - 500 NEOC Bounty, First 5 Pools - 500 NEOC Bounty nebulacoin_neb.png 1 block 2014-06 10 50 NEB Nebulacoin 2014-06-21 20000 60 1450000 Algorithm: X11 POW/POS Short: NEB, Total coin: ~1,450,000 with PoW, Block reward: Random Super Block, Reward Blocks, 50 NEB Up to 200 (Low Reward while mining pools gets setup and users discover Nebulacoin to create a fair launch), 100 NEB Up to 1000 , 300 NEB Up to 2500 , 150 NEB Up to 6000, PoS starts on block 6500, PoW last: 6500 ~approx 5 days, PoS generate after block 6400, Block time: 60 seconds, PoS Min age: 8 hours, PoS Max age: Unlimited, 20K NEB Premine for Bounties/Dev, Difficulty Readjusts every block, Confirmations on Transactions: 10, CoinBase Maturity: 50, Stake interest: 3% per year ratcoin_ratc.png 2014-05 RATC Ratcoin 大鼠币 jewelcoin_jewel.png 4 hours 2015-02 8 JEWEL JewelCoin 2015-02-19 1% 1 60 350000 Name JewelCoin, Coin Abbrevation JEWEL, POW/POS, POS 7%, Hash type x13 no ASIC, Premine 1%, Coins per block 1, Target spacing 1 min, Halving 50000 blocks, Target timespan 4 h, Coinbase maturity 8 blocks, Max coinbase 350.000 ('h', 50000, 'b') noocoin_noo.png 2014-12 NOO Noocoin 2014-12-12 100% 60 25000000 100% PoS, 3% annual interest, 4 hour stake. 100% presale credits_crd.png 60 minutes 2014-01 6 70 CRD Credits 2014-01-19 3.5% 33.7 30 337337337 re-target every 1 hour, 33.7 coins per block, and 337.3 million total coins. dvorakoin_dvk.png 2014-05 DVK Dvorakoin speccoin_spec.png 4 blocks 2015-06 44 SPEC Spec 2015-06-28 30% 1250 300 3000000000 SPEC is a lite version of Bitcoin using scrypt as a proof-of-work algorithm., Coin Name: SPEC, Algorithm: Scrypt, Coin Abbreviation: SPEC, Block Time: 5 minutes, Block Reward: 1250 SPEC, Retarget: 4 Blocks, Minted Coin Maturity: 44 blocks, Block Halving: 840000 blocks (~8 years), Total Coin Supply: 3,000,000,000 coins, ('h', 840000, 'b') liteshares_lts.png 2014-04 LTS Liteshares fuelcoin2_fc2.png 2014-04 FC2 FuelCoin 2014-04-29 50000000 60 premined, revamp of fuelcoin 100000000 NEW FuelCoin - FC2 Re-Distribution x11/pos 2% interest 2014-04-30, 03:15:32, Fully Mined for re-distribution, Ticker: FC2, Total Coins: 100 Million, Algo: x11, Interest Stake: 2% a3coin_a3c.png 2014-04 4 60 A3C A3 coin 1.5% 60 110000000 Pure POS. Block Rewards: 1st year nominal stake interest : 50% 2nd year nominal stake interest : 30% 3rd year nominal stake interest : 20% 4th year nominal stake interest : 15% 5-10 year nominal stake interest : 5% pa 10 years onward nominal stake interest: 3% pa guldencoin_nlg.png 2014-04 NLG Guldencoin 10% premined 1700000000 Algorithm: scrypt PoW KGW Transaction confirmations: 6 Total coins: 1700M Premine: 170M (+/- 10%) Starting diff: 0.00244 Block reward: 1000 NLG Block time: 150 seconds Retarget: 1440 blocks Reward halving interval: 840.000 blocks siacoin_sia.png 2015-05 SIA Siacoin 2015-05-14 600 storage-contract -1 Anybody with siacoins can rent storage from hosts on Sia. This is accomplish via ”smart” storage contracts stored on the Sia blockchain. The smart contract provides a payment to the host only after the host has kept the file for a predetermined amount of time. If the host loses the file, the host does not get paid. prismacoin_prisma.png 2014-07 PRISMA Prismacoin 2014-07-21 1.5% 60 6000000 COUNTDOWN TIMER, PrismaCoin was created by three people (web developer, designer and c++ programmer)., We are now just a small group but we have lot of projects and plans for the future so stay here and keep updated., With a short POW we give less time for the miners to dump their coins during the POW phase. The 10% interest on POS means more incentive for you to hold your coins. With this we can avoid a big dump., You will able to buy Web-designs, Logos, Banners, Pictures, directly from the owners with PrismaCoin., And you will able to hire web developers and designers with PrismaCoin too., We are now negotiating with some online stores in Europe about accepting PrismaCoin as payment method to buy electronic devices such as Personal Computer, Hardware, Television, Mobile Phone., PrismaCoin will be accepted by this store in at the end of August., 2014.07.28, PrismaCoin has good news! , From August 17th a newly created company based in Hungary will accept PrismaCoin., The Company's main profile is ASIC miners. We spent a lot of time with the owners and finally they will make business with us., They will open an Online Store on August 17th., You will able to pay with Bitcoin or with PrismaCoin!, If you decide to pay in Prisma we have a good news for you! Because the prices in PRISMA will be lower than in BTC. , They will announce a giveaway where somebody can win 2 Antminer S3s or 1 Dragon T1. We don't have more information about it. If we will have any information from the webshop we will announce it!, You will able to buy ASIC miners:, Antminer S3, Rockminer 32Gh/s, KNC Jupiter, TerraMiner II, Dragon T1, Butterfly Labs Jalapeno, And many others!, algorithm: x13, supply: 6.000.000 (6million) PRISMAcoin, reward: 1000 PRISMAcoin, blocktime: 60s, last POW block: 6011, min height: 3 hours, max height: 10 days, stake: 10%, Premine: 1.5% (90000) For development, giveaway, bounty, Blocks:, 1: premine, 1-100: 1 PRISMA, 101-6011: 1000 PRISMA internetcoin_ 2014-03 WWW Internetcoin re-target every 60 blocks, 50 coins per block, and 500 million total coins. naanayamcoin_nym.png 1 block linear 2013-11 100 NYM NaanaYaM 2013-11-07 0 1 10 -1 re-target every 10 seconds, POW / POS, 1 coins per block, and 100 trillion total coins. Just a PPC clone. Naanayam - Naanayam Abbreviation: NYM Algorithm: SHA-256 Date Founded: 11/7/2013 Total Coins: Unlimited Confirm Per Transaction: 100 For mint Re-Target Time: linear (per-block) Block Time: 10 seconds Block Reward: 1 Coin Diff Adjustment: linear (per-block) Premine:. Novacoin-based. vaultcoin_vlt.png 2014-04 VLT Vaultcoin 66000000.0 stepscoin_steps.png 2015-09 STEPS Steps 2015-09-07 1000000 60 25000000 Ticker: STEPS, ICO/FREE GIVEAWAY/POS, Initial Supply: 16 MILLION, 100% Proof of Stake, POS: See Reward System Below, Minimum Staking Age: 24 hours, Total Supply: ~25 MILLION, THE ICO WILL RUN IN 3 STAGES BY A TRUSTED ESCROW SERVICE FOLLOWED BY POS AND LISTINGS ON MAJOR EXCHANGES pkrcoin_pkr.png 2014-03 PKR PKRCoin 69000000.0 potatocoin_spud.png 2014-01 SPUD Potatocoin 2014-01-31 500 30 premined 625000000 Algorithm: Scrypt, Block Reward: 500 (decreasing by 0.0002 per block), Block Time: 30 seconds, Maximum Coins: 625 Million ('r', 0.0002, 'b') nofiatcoin_xnf.png 2014-01 XNF NoFiatcoin kimcoin_kmc.png 2014-04 KMC Kimcoin 2014-04-01 Multi-hash algo based cryptocoin with re-target every 500 blocks, CPU only mining, 100 coins per block, and 210 million total coins. fantomcoin_fcn.png 2014-05 FCN Fantomcoin 2014-05-06 60 -1 Fantomcoin is the first CryptoNote currency to support merged mining of different CryptoNote-based coins, allowing to receive not only the FCNs but also any other CryptoNote-based currency without extra mining effort. As a result, the cryptographic security of the coins is increased. This also allows fair resource distribution and stabilizes the cryptocurrency market through diversification. siriuscoin_ssc.png 2014-01 SSC Siriuscoin re-target every 4 blocks, 25 coins per block, and 10 billion total coins. teslax3coin_tesla.png 2014-03 TESLA Teslax3coin 2014-03-20 33 120 17500000 ('h', 'y') vaporcoin_vprc.png 30 min 2015-06 150 VPRC VaporCoin 2015-06-02 60 1000000 ALGO: SHA256, P2P port: 56631, RPC port: 57631, POW: 4000 blocks, POW rewards: , Block 1-1000 = 843 coins/block, Block 1001-2000 = 621 coins/block, Block 2001-3000 = 437 coins/block, Block 3001-4000 = 271 coins/block, POS: DPOS, Block 1-30000 = 14 coins/block, Block 30001-10000 = 7 coins/block, Block > 100001 = 3.5 coins/block elektroncoin_ekn.png 2015-04 5 30 EKN Elektron 2015-04-10 2000000 60 6000000 Currency name: ELEKTRON, Currency symbol: EKN, Total coins after PoW: 3 Millions circa , Coins available to mine: 1 Million in 15 Days after ICO ends, Coins available for pre-sale: 2 Millions , Confirmations for block maturity: 30, Block time: 60 seconds, Confirmations for transactions: 5 (really fast tx), Annual stake interest rate: 1,5%, RPC port: 23123, P2P port: 23121, Minimum stake age: 12 hours, No max stake age., Auto checkpoint Masternode system., Block reward scheme:, 1st Block: 2 Millions premine for pre-sale, 2nd to 149th Block: 0 block reward for testing and premine move to escrow, 150th to 1560th Block: 150 EKN per block, 1561th to 5881th Block: 30 EKN per block, 5882th to 21880th Block: 50 EKN per block, After block 21880 Elektron network will switch to full PoS. , No more PoW blocks will be accepted after block 21880., A dedicated EKN multipool will be released to coincide with the end of PoW phase. unattainiumv2coin_unat.png 1 block 2015-03 8 UNAT UnattainiumV2 2015-03-24 0.25 30 multi-algo, reconfig 1000000 Algo: SHA-256d, SCRYPT, SKEIN, QUBIT, GROESTL, Reward: 0.25 UNAT, Spacing: 30 seconds across all algorithms, Retarget: Every block, Diff algo: Digishield multi-algo fiftyshadesofcoin_fifty.png 5 hours 2015-02 5 FIFTY FiftyShadesofCoin 2015-02-27 2% 500 60 ignored 50000000 Coin properties Coin typeBitcoin (SHA256) Halving 50000 blocks Initial coins per block 500 coins Target spacing 1 min Target timespan 5 h Coinbase maturity 5 blocks Max coinbase 50.000.000 coins ('h', 50000, 'b') mycoin_myc.png 2014-03 MYC Mycoin 2014-03-29 100% 15 premined 880000000 darkcorgicoin_dcorg.png 20 minutes 2015-09 5 10 DCORG DarkCorgi 2015-09-12 0 10000 120 200000000 ƊαякƇσяgι, Ƈσιη Aввяєνιαтιση: ƊƇORƓ, Aℓgσяιтнм: Scяуρт, Ɓℓσcк Ƭιмє: 2 мιηυтєѕ, Ɗ郃ιcυℓту Rєтαяgєт: 20 мιηυтєѕ, Ɓℓσcк яєωαя∂: 10,000 cσιηѕ, Ɓℓσcк нαℓνιηg: 10,000 вℓσcкѕ, Ƭσтαℓ cσιη ѕυρρℓу: 200,000,000 cσιηѕ (200 Mιℓℓιση), Mιηιηg Mαтυяιту: 10 cσηƒιямαтισηѕ, Ƭχ Mαтυяιту: 5 Ƈσηƒιямαтισηѕ, RƤƇ Ƥσят: 4790, Ƥ2Ƥ Ƥσят: 4789 ('h', 10000, 'b') fbcoin_xfb.png 2014-06 XFB FBcoin 2014-06-06 15 100000000 Algorithm: Scrypt, Blockchain Security: POW/POS, Total Coins: 100Million, Block Reward: 100XFB, Block Time: 15 Seconds, Interest: 10% Annually, Min stake age: 3days, Max age: 100days quazarcoin_qcn.png 2014-05 60 QCN QuazarCoin 2014-05-08 120 18446744 Quazarcoin is the CryptoNote currency launched as a result of community discussions. It has a flatter emission curve and a clear launch for the wider community. QCN's developers focus on usability aspects of the currency. Its main contribution is the popularization of CryptoNote. longcoin_lng.png 2014-07 LNG LongCoin 2014-08-03 5% 60 2000000 Launch date: August 3rd, 2014 9:00 UTC, News, Aug 3rd, 2014 - Launch delay, There is a delay with the launch. We need to setup the pool and block explorer. The launch has been rescheduled to 23:00 UTC (12 hours delay), July 26th, 2014 - Presale started, You can now buy Longcoins before the launch date:, Wallets, Windows Wallet: , Source: , longcoin.conf, Code:, rpcuser=youruser, rpcpassword=yourpassword, listen=1, daemon=1, server=1, rpcallowip=127.0.0.1, addnode=66.45.238.253, Specifications, Symbol: LNG, Algo: X11 PoW, Block time: 60 sec, Difficulty retarget: every block, Total coins: 2,000,000, Premine: 100,000 - 5% of total supply (80,000 presale, 20,000 bounties, faucets, etc), Website,, Block rewards, Block 0 - 0 : 0 LNG, Block 1 - 1 : 100,000 LNG (presale, promotion), Blocks 2 - 360 : 0 LNG (6 hours), Blocks 361 - 43200 : 0.5 LNG (1 month), Blocks 43201 - 525600 : 0.25 LNG (1 year), Blocks 525601 - 2628000 : 0.125 LNG (5 years), Blocks 2628001 - 24763840: 0.0625 LNG (47 years), Bounties, First exchange - 5000 LNG, Dice Game - 1000 LNG, Faucet - 1000 LNG, New logo - 1000 LNG unlock.mk, Block explorer,, Pools, 1% fee, Exchanges, TBA mandarincoin_mac.png 2014-04 MAC Mandarincoin russiacoin_rc.png 2014-06 RC Russiacoin 2014-06-07 144000000 songcoin_sng.png 2014-09 120 SNG SongCoin 2014-09-30 2% 100 120 210240000 Songcoin - Altcoin Designed for Investing In Music, Algorithm : Scrypt - Litecoin Descendant, Number of Coin : 200 Million, Premine : 2%, Difficulty : 3.5 days, Coins on each block : 100 coins, Time between blocks : 2 min, blocks per day : 720 blocks, Total blocks in 4 years : 1.051.200 blocks, Total all coins : 210.240.000 coins, Maturity Coinbase : 100+20 fistbumpcoin_fist.png 2015-06 FIST FistBumpcoin 2015-06-01 60 67000000 FistBump : Algo X13 -PeerCoinFork, Proof of Work 24 Hours - 1440 Block, Proof of Stake; Min Age 60 Minutes, Proof of Stake Reward; 20%, Increasing Block Reward:, 0 - 480 each Block 20000[FIST] ~ 8 Hours, 481 - 960 each Block 40000[FIST] ~ 8 Hours, 961 - 1440 each Block 80000[FIST] ~ 8 Hours, After Block 1440 Full Proof of Stake, PoW Phase : 67 Million [FIST] guerillacoin_gue.png 2014-06 510 GUE Guerillacoin 2014-06-18 0 750 60 fast staking 15000000 Algo: X11 PoW/PoS, Total coins: 15 000 000, Block time: 1 minute, Block reward: 750, Block maturity: 510, POW Blocks: 10 000, FAST STAKE INTEREST after 8 hours but no more than 365 days., NO Premine cooperationandsolidaritycoin_cas.png 2014-12 CAS CooperationAndSolidarityCoin 2014-12-17 0 496 60 virused 4500000 Total coins : 4.5 million, PoW timespan : 7 days, Algo : x13, Anon : PoSA in beta now., Pre-mine : none, IPO : none, PoW block reward: 496, Mining will stop after 10080 blocks (7 days), Block time: 60 seconds, PoS 6%, Minimum age 1 hour, Max age 8 hours masterdogecoin_mdoge.png 2015-06 MDOGE Masterdoge 2015-06-28 1% 60 100000000 Algorithm: Scrypt, Ticker: MDOGE, Supply: 100 Million, PoW blocks: 20050, Reward: 5000 MDOGE, Block Target: 60 sec, PoS: 10%, Stake Min age: 6 hrs, Masternode Cost: 20000 MDOGE, Masternode reward: 1000 MDOGE + 5% of PoS, Premine: 1 Million (check Bounty list below) whistlecoin_wstl.png 2014-10 10 70 WSTL Whistlecoin 2014-10-07 0.5% 4728 60 20000000 Algo X11, POW/POS, POS interest 2%, Block 1 - premine 0.5%, Block Rewards 2- 7199 - 2764, Block Rewards 7200 - 4728, POS Starts at Block 6000, 60 Second Blocks, Confirms 70 for mined, Confirms 10 for Transactions, 20 Mil Coins in POW, Min Stake Age 2hrs, Max Stake Age 8hrs einsteinium_emc2.png 2014-03 EMC2 Einsteinium 300000000.0 re-target using Kimoto's Gravity Well, 1024 coins per block, and 299.7 million total coins. sphere_sph.png 2013-07 SPH Sphere 2013-07-23 auto re-target and 20 coins per block. whirlcoin_whrl.png 2014-07 WHRL Whirlcoin 2014-07-21 0 45 168000000 Whirlpool Algorithm, WhirlCoin uses the Whirlpool algorithm, and is the first to do so. This algorithm utilises the Merkle–Damgård construction based on AES, created by one of AES' creators .It has remained open cource and without patents since its release., The Wirlpool algorithm is fast and efficient. It has an average speed of 57 Mib/s, almost as fast as SHA-256 and much quicker than most algorithms used by cryptocurrencies. Whirlpool is also energy efficient, allowing both miners and clients to work to a higher capacity., WhirlCoin implements SHA-256 alongside Whirlpool for added security; Each hash has one round of Whirlpool and one of SHA-256., Specification:, Algorithm: WhirlPool, Money Supply: 168,000,000, Block Target: 45 Seconds, Rewards: WhirlCoin's rewards follow -sqrt(0.0026x)+100, a negative square root function starting at 100 and ending at 10, 4 years later., Why does WhirlCoin not have Pos?, PoS is a dying concept. It allows those the rich to easily amass more wealth, and it inspires people to dump the coin after the PoW phase., This view is becoming more and more popular, and a more in-depth look at the issue is avaliable here: Below is a plot of the first million or so block rewards for WhirlCoin. yinyangcoin_yinyang.png 2014-02 YINYANG YinYangCoin 36000000 36,000,000 Total Coin Supply Max 1,000,000 available for .0001 each week of competition scrypt - 0 block reward, 0 mining fees foxcoin_fox.png 60 seconds 2014-02 2 FOX FoxCoin 2014-01-17 0 250 60 premined 1000000000 Fox Coin - FoxCoin Abbreviation: FOX Algorithm: SCRYPT Date Founded: 1/17/2014 Total Coins: 1 Billion Confirm Per Transaction: 2 Confirms per TX Re-Target Time: 60 Seconds Block Time: 1 Minute Block Reward: 250 FOX decreases by 0.000125 FOX every block Diff Adjustment: Premine: 25k Coins. ('ra', 0.000125, 'b') nightmarecoin_nm.png 2014-08 NM Nightmarecoin 2014-08-15 2% 60 7140000 Nightmare 'NM', POW/POS, Algorithm - FRESH (Shavite, Simd, Echo, in 5 rounds of hashing), Total Coins - ~7.14 Million NM, 60 second block time, 11% POS Interest for the first 1 year then drops to 5.5%, 1% developer premine and 1% premine for bounties, NightSend addresses start with a lowercase 'n', Nightmare addresses start with a capital 'N', , BLOCK REWARDS, Block 1 is 1% NM developer premine & 1% premine for bounties, Block 2-160 are 0 NM for fair start, Block 161-14000 are 500 NM (~10 days POW), POS with 11% annual interest and then 5.5% annual interest after 1 year. faucetcoin_fec.png 2014-04 FEC Faucetcoin 100000000 Transaction fees are 0,010 Faucets: Available at launch Faucets limit: about 180 000 coins a day total 100 000 000 coins total Algo: scrypt Block reward: 1000 FEC Block time: 60 seconds Retarget: every 10 minutes Transaction: 3 confirmations RPC port 9939 P2P port 9949 globecoin_glb.png 9 blocks 2013-12 GLB Globe 2013-12-06 0 ((0, 2months, 10), 5) 60 a re-target every 9 blocks and 10 coins per block. Globe Coin - GlobeCoin Abbreviation: GLB Algorithm: SCRYPT Date Founded: 12/6/2013 Total Coins: Confirm Per Transaction: Re-Target Time: Block Time: 1 Minute Block Reward: 10 coins per block for first two months, 5 coins thereafter + 4% increase in block size annually -adjusted every day (every 1440 blocks) 20% tax on coins Diff Adjustment: 9 Blocks Premine:. ('ip', 4, 'y', 'd', 1, 'd', 1440, 'b', ('tax', 20)) coolingcoin_clc.png DGW 2014-07 10 CLC Coolingcoin 2014-07-10 0 1000 60 10000000 Coolingcoin coin has been created with the Algo NIST5 which uses, GRoSL, JH, BLAKE, KECCAK & SKEIN. These five Algos together make the best cooling rate for miners and is also energy efficiant.. Total coin: 10 Million Block time: 60 seconds Difficulty Retargets:DGW Decentralized MasterNode Network Confirmations on Transactions: 10 1.5% Premine -- For IPO & bounties, development. POW REWARD Total Block: 10 000 Block 1- 10 000: 1 000 After block 10 000 POS will start . 10% Annual PoS. ethereumdarkcoin_etd.png 2014-11 ETD EthereumDark 2014-11-05 60 1400000 Algorithm: X11, Ticker: ETD, Est Supply: 1.4 mill, Max POW Height: 14k, Block Time: 60s, POS Interest: 100%, Min stake Age: 1h, POS Starts at block 8200, POS % interest will half every 30 days, Future Features:, Stealth Addressing, Mixing with Nodes, In wallet Chat and more! qubitcoin_q2c.png 2014-01 Q2C QubitCoin 2014-01-12 248000000 2048 coins per block and 248 million total coins. whalecoin_whale.png 1 block 2014-08 10 WHALE Whalecoin 2014-08-01 1% 60 864000000 X11 - 5 Days POW / POS , 864,000,000 max coins , 120,000 coins per block , 60 sec block time , 7,200 blocks , 10 confirmations for blocks to mature , Re-target difficulty 1 block. , PoS 8.88% interest , PoS Min age: 8 hour , PoS Max age: unlimited , 1% premine for development sync_sync.png 2014-05 SYNC Sync 2014-05-17 0.5% 0.14 60 1000 katecoin_ktc.png 2015-08 KTC KateCoin 2015-08-05 200 4 60 closed source 42000000 Kate currency, KateCoin 2015/8/5 release, Total: 42 million, Symbol: KTC, RPC port 19765, P2P port 19766, sha256 algorithm, 60 seconds a block, the first 200 blocks each block reward a currency for testing the block chain. After each block award four coins, halving every 60 days, until the last 0.06 coin reward exhausted. Kate currency KTC official total population 481,484,329 official website ('h', 60, 'd') plebeiancoin_pleb.png 2014-08 PLEB Plebeiancoin 2014-08-09 2% 4200 67 23129316 Algo: X13, Symbol: PLEB, Block Time: 67 Seconds, Block Award: 4,200 PLEB, PoW Period: ~4.2 Days, Total Mineable PoW Money Supply*: 22,666,730 PLEB, Total Money Supply incl. Premine*: 23,129,316 PLEB, Last PoW Block: 5400, PoS Interest: 20% Annually, Coin PoS Age: Min. 1 day / Max. 7 days, Premine: 2% = 462,586 PLEB (1% for dev team fund, 1% for community bounties) hamburgercoin_hac.png 60 blocks 2014-02 3 20 HAC Hamburgercoin 2014-01-25 0 20 premined 500000000 Hamburger Coin - HamburgerCoin Abbreviation: HAC Algorithm: SCRYPT Date Founded: 1/25/2014 Total Coins: 500 Million Confirm Per Transaction: 3 for tx 30 for mint Re-Target Time: 60 Blocks Block Time: 20 seconds Block Reward: Random Diff Adjustment: Premine: 55%. judgecoin_judge.png 2014-08 110 JUDGE Judgecoin 2014-08-16 500 60 11450000 Total coins: About 11,450,000+ (After PoW ended), 500 Coins per PoW block (Ended), PoW Algorithm: X13 , PoW + PoS Hybrid , PoW Blocks: 38,640, Stake 6% Annually , 24hr Minimum coin age , 60 second block target , 110 confirmations for blocks to mature clockcoin_ckc.png 2014-01 CKC Clockcoin re-target each block, 60 coins per block, and 525 million total coins. minecoin_xmine.png 2015-08 XMINE Minecoin 2015-08-27 8400000 100 30 unmentioned premine 84000000 Symbol MINE, Pure scrypt Proof of Work coin, 84 Million coins available, Block reward 100 MINE halving every 210000 blocks, 30 seconds block time ('h', 210000, 'b') twerkcoin_twerk.png 2015-05 750 TWERK Twerk 2015-05-16 120 -1 TWERK = HIGH REWARD = SHA256 = 现在挖 TWERK, Hashing Algorithm: SHA256, Total POW: 4000 Blocks, POW Reward: , 1-10 = 100 TWERK (for anti-instamine), 10-499 = 4000000 TWERK, 500-999 = 2000000 TWERK, 1000-1499 = 1000000 TWERK, 1500-1999 = 500000 TWERK, 2000-2499 = 250000 TWERK, 2500-2999 = 125000 TWERK, 3000-3499 = 62500 TWERK, 3500-4000 = 31250 TWERK, POS Reward:, 1-19999 = 10000 TWERK, 20000-99999 = 25000 TWERK, 100000-199999 = 20000 TWERK, 200000-399999 = 15000 TWERK, 400000-infinity = 10000 TWERK, Block space: 2 Minutes, Maturity: 750 Blocks, Stake Min Age: 7 Hours, Stake Max Age: 24 Hours, Port: 32101, RPC Port: 32100, 20 MB maximum block size Can handle 100x more transactions per second than bitcoin ((20,000,000/1,000,000)*((10*60)/(2*60)), Magic Bytes: 0xa3 0xb2 0xf9 0xc7 lemoncoin_lmc.png 2014-03 LMC Lemoncoin 10000000000.0 helixcoin_hxc.png 2014-03 HXC Helixcoin 120000000.0 SHA-3 based cryptocoin with re-target using Kimoto's Gravity Well, 151 coins per block, and 120 million total coins. gongyicoin_gyc.png 5 blocks 2015-07 GYC Gongyicoin 2015-07-07 0 60 closed source 42000000 Currency name: public coin English gongyicoin abbreviated GYC public money was born on 7 July 2015, x13 algorithm based on and improved unique algorithms and professional mining machine graphics are not were mining, computer CPU can mining, fairer public coin issued a total of 42 million , without robbing and no pre dug. The overall interface of the wallet is updated (public money is committed to the development of social welfare)., R & D Author: GYc team, Core algorithm: Quark, Algorithm: x13 algorithm based on improved specific anti budget wallet algorithm, graphics and professional mining machine are not mining, computer CPU can mining, Total amount: 42000000 without pre excavation, Out of block speed: 60 seconds a block, an annual output of 5, Block Awards: the first annual output of second 10000000 third fourth 5000000 fifth 2000000, Block browser:, Official website: urodarkcoin_urod.png 2014-11 UROD UROdarkcoin 2014-11-09 60 2000000 algorithm = Scrypt block time = 60 seconds. Block 1 - 1,000,000 UROD, sold in ICO on Allcoin (20 BTC at ICO, 2000 satoshi per coin). Yearly POS reward will be 100% redkoin_rkn.png 2014-06 RKN Redkoin 2014-06-30 yes 120 0 RedKoin is planned to be a 100% proof of stake coin, except for the premine. The premine will be transparently distributed and monitored using an official faucet and block explorer. The public address of the faucet's wallet will be publicly available on launch. The faucet will follow a exponential decay model, paying out highly at launch and then tapering off quickly within a week. The exact amount of the premine is still up for debate. The initial planned amount was to be 50% but we decided that it was best left up to the community. bestcoin_best.png 2013-07 BEST Bestcoin 2013-07-31 premined re-target every 10 minutes or 2 blocks, 1 coins per block, and 1 million total coins. Abbreviation: BEST Algorithm: Date Founded: 7/1/2013 Total Coins: 1 Million Confirm Per Transaction: 6 Re-Target Time: 60 minutes Block Time: 5 Minutes Block Reward: 1 Coin Diff Adjustment 12 Blocks. sovereigncoin_svc.png 2014-03 SVC Sovereigncoin 600000000.0 1000 coins per block and 600 million total coins. cryptocoininccoin_cci.png 2014-09 CCI Cryptocoin Inc Shares 2014-09-25 180 clone of bottlecaps 100000 Cryptocoin Inc. is a registered Canadian company based in Barrie, Ontario, Canada. CCI is in the process of building out a medium-scale Bitcoin mining operation and offering its services as a virtual provider of cloud-based Bitcoin mining contracts in the near future. CCI Shares are a direct stake in ownership and profit sharing for Cryptocoin Inc. CCI Shares are Proof of Stake with null (0 coin) Proof of Work mining for transactional movements prior to full distribution. Total Shares: 100,000. Null/Zero-Coin blocks for Proof of Work to keep TX moving before staking period. 1% Per-Annum Stake with 15 Day Maturity and 30 Day Full Weight. Transactional Messaging, In-Client Extras zimstake_zs.png 2014-04 ZS Zimstake re-target every 1 block, 512 coins per block, and 70 million total coins. raincoin_rain.png 2015-09 20 RAIN Raincoin 2015-09-09 100% 120 2100000 Block time : 120 seconds, Mined: 2,100,000 ( for rain drop since day one ), 7,000 coin drop each hour for online wallet., Supply will be dry about 12.5 days., 10% POS yearly, Coin base Maturity : 20 fuguecoin_fc.png 2014-03 FC Fuguecoin 2014-03-17 50 experimental 84000000 ('h', 840000, 'b') cryptopowerscoin_cps.png 2014-05 CPS CryptoPowers X11 based cryptocoin with re-target every 5 hours, 10,000 random coins per block, and 43 million total coins. 2baccocoin_bacco.png 33 2015-06 2 BACCO 2bacco 2015-06-22 12% 512 120 81454545 Total coins = 81,454,545, 2bacco coin premine of 12% = 9,774,545.4 was needed to pay for current and ongoing costs for 2bacco coin, 2bacco coin specifications:, total coins = 81,454,545, Block Time = 2 minutes, Coins mature after 2 blocks, Target timespan = 33 hours, 512 coins mined per block to start, block size/amount halves each 70,000 blocks or approximately each 6 months, ('h', 70000, 'b') empyreancoin_epy.png 2015-04 170 EPY Empyrean 2015-04-29 50% 0.5 60 100000 Algo: Scrypt, Max coins: 100k, 100% POW, Difficulty halving: 20,000, Block reward: 0.5, TX fee: 0.00001, Coinbase Maturity: 170, 50% of the coin supply will be offered in an ICO format ('h', 20000) feathercoin_ftc.png 2013-04 FTC Feathercoin 羽毛币 2013-04-16 200 150 premined 336000000 re-target every 2016 blocks, 200 coins per block, and 336 million total coins. LTC clone with 4x more coins. starting diff 0 Feather Coin - Feathercoin Abbreviation: FTC Algorithm: SCRYPT Date Founded: 4/16/2013 Total Coins: 336 Million Confirm Per Transaction: Re-Target Time: Block Time: 2.5 Minutes Block Reward: 200 Coins Diff Adjustment. ulicoin_uli.png 2013-12 ULI Ulicoin 2013-12-22 4900 coins per block and 6 billion total coins. bluntcoin_blunt.png 1 block 2014-12 BLUNT Bluntcoin 2014-12-04 0.75% 60 7500000 ('h', 5000 'b') ufocoin_ufo.png 1 hour 2014-01 UFO UFO Coin 2014-02-11 5000 4000000000 ('h', 400000, 'b') gawpaycointm_xpy.png 1 block 2014-12 XPY Paycoin 2014-12-13 96% 49 60 premined 12500000 Name Paycoin™, Symbol/Tag XPY, Website, Github / Source Code, Forum, Hash Algorithm SHA-256, Proof-of-Work Scheme Proof-of-Work/Proof-of-Stake, Coins to be Issued 12,500,000, Block Time 1.00 minute(s), Block Reward 49.00 coins, Difficulty Retarget 1 blocks greedevolvedcoin_ge.png 30m 2015-07 5 GE GreedEvolved 2015-07-18 60 swap coin 50000000 hazmatcoin_hzt.png 2015-04 HZT HazMatCoin 2015-04-10 0 100 180 100000000. Ticker is HTZ Scrypt algorithm 100,000,000 total coins 100 Coin block reward 180 second target spacing 10% Yearly Interest No pre-mine No ICO or IPO 66coin_66.png 11 minutes 2014-02 66 66Coin 2014-01-25 0 0.000066 66 premined 66 re-target every 11 minutes, 0.00006600 coins per block, and 66 total coins. 66 Coin - 66Coin Abbreviation: 66 Algorithm: SCRYPT Date Founded: 1/25/2014 Total Coins: 66 Coins Confirm Per Transaction: Tx fees are 0.0000000066 Re-Target Time: 11 Minutes Block Time: 66 Seconds Block Reward: 0.00006600 Diff Adjustment: Premine: 4.54%. bawbee_baw.png 2014-04 BAW Bawbee 2100000 Litecoin/Scrypt Based Kimoto Gravity Well Pre-mining - 1 x Genesis Block, 2 x Checkpoint Blocks Limited supply, a tight fisted 2,100,000 coins only Symbol: BAW Reward: 2 BAWs / Block Block generation: 2 minutes Halves: Every 2 years vertcoin_vtc.png 3.5 days 2013-01 VTC 绿币 Vertcoin 2013-01-08 0 50 150 premined 84000000 Scrypt-Adaptive-Nfactor based cryptocoin with re-target every 3.5 days, 50 coins per block, and 84 million total coins. Vert Coin - VertCoin Abbreviation: VTC Algorithm: Scrypt-Adaptive-Nfactor Date Founded: 1/8/2013 Total Coins: 84 Million Confirm Per Transaction: Re-Target Time: 3.5 Days Block Time: 2.5 Minutes Block Reward: 50 Coins Halves every 4 years Diff Adjustment: Premine: 3 Blocks. ('h', 4, 'y') rebirthcoin_reb.png 1 block KGW 2014-09 50 REB Rebirthcoin 2014-09-05 0 60 3527369 Name: Rebirth [REB], Algo: SHA-256, Block time: 1MIN, Shares: (750,000) ICO + (401,339.2) Free Distribution 350,000 + Bounties 51,339 , ICO = Will be Host on C-Cex.com (Total 22.5 BTC (750,000 X 0.00003000 ) ), Free Distribution = (1 share = 100 REB ) (1 Per Person ), Bounties = ( 1 share = 100 REB ), Blocks, Blocks( 1):ICO+Distribution = 1,151,399.2 REB , Blocks(2-14401) = 0 REB (10 Days ) Total: 0 REB , Blocks(14402 to 172800) = 15 REB (110 Days ) Total: 2,375,970 REB, Blocks(172800-X) = 0.01 REB, Small inflation is to keep the network hashing, (120 days until reaching block 172800), Block Retarget: every Block, Total coin supply :3,527,369 REB + yearly inflation, Coins mature after 50 blocks, Bitcoin 0.9.2 Based , Fast Transactions vtorrentcoin_vtr.png 2014-12 50 VTR vTorrent 2014-12-11 3% 200 60 20000000 Total supply: 20 million Coinbase maturity: 50 blocks Target spacing: 60 seconds Premine: 3% Algorithm: Scrypt (Hybrid PoW/PoS) Stealth: All-time, impossible to trace transactions through the block chain. Proof of Work Last POW block: 97,000 Block reward: 200 VTR Proof of Stake PoS interest: 5% per year Min stake age: 6 hours Max stake age: 6 days darkblockcoin_dblo.png 2014-08 DBLO Darkblock 2014-08-23 0 5000 45 5000000 Darkblock Coin Built up by the community. Specifications:, X11, 5,000,000 PoW coins, 5,000 coin reward per block, 1,000 total PoW blocks, 45 second block time, No Premine, No IPO, No hidden premine (we will ask for code review from anyone) bitchcoin_btch.png 2014-09 720 BTCH Bitchcoin 2014-09-21 0 10 60 1000000 20 BTCH per block for 1st month. Then drops to 10 BTCH, Every 10K blocks, there is a bitch block with no reward, no premine, X11 because it's the one people bitch the least about and I already had a coin with X11 so it was easy, First “fair ninja”, No addnode shit needed, 720 block maturation time. (3 days), to ensure people bitch about not being able to dump granite_grn.png 2014-05 GRN Granite heelcoin_heel.png 2015-07 HEEL Heelcoin 2015-07-12 5000000 60 100000000 signaturecoin_sign.png 2014-08 3 50 SIGN Signaturecoin 2014-08-01 0 60 18000000 SignatureCoin is unique, in that it will have an anonymous wallet ready by launch time. There will not be “plans” to do so in the future, as with many other coins. The anonymous wallet features are freshly implemented by the development team from scratch according to the principles of Coinjoin, SignatureCoin comes with an innovative distribution model. Of the total PoW coins, 75% will be distributed *freely* to community members that meet certain criteria, on a daily basis and over a period of 90 days. The number of coins distributed depends on qualified people; for example, if 300 members qualify, they will each receive a daily distribution about 300 coins. (Actual values may change), The rest of the coins will be (1) mineable through PoW Mining (2) be used in mixers for anonymous transfers. The coins used in mixers are considered as “service” coins and will not be in circulation. These coins are required by mixers as “buffers” to encrypt the transfers with dynamic addresses. We will provide detailed percentages at launch time., The distribution will be sent to qualified members who actively keep their signature updated according to that given in OP and SignatureCoin website., The details of distribution will be posted at launch time. Distribution will be done through the script in an automatic fashion., SignatureCoin uses many other advanced features, such as separation of PoW and PoS. This leads to a super fast transaction time, usually less than 1 min in a PoS-established network (usually after 2-3 days of the launch). It also uses true randomness for superblocks, offering pleasant surprises to miners., Though with great features, we don't do IPOs to get money. Nor do devs reserve any pre-mines for them. Devs will, like many other people, receive the coins according to the standard distribution method., This is a community coin - the community is the true owner of the coin. All community members are expected to have creative ways to support, promote, help, and enhance the coin. We hope that this set-up will serve all in an exciting yet harmonious way., SignatureCoin (SIGN), X11, Anonymous wallet using CoinJoin technology (fresh implementation from scratch) - ** available at launch time ** !, Will have 3-5 dedicated mixers., PoW/PoS separate, 3 transaction confirmations (< 1min on average), 50 minted block confirmations, Total PoW coins will be about 13 millions and total coins including PoS will be about 18 millions., No IPO, No premine for devs. gamecryptocoin_gec.png 2014-06 240 GEC GameCryptocoin 2014-06-27 12.5% 250 60 80000000 GameCrypto peer-to-peer currency launched for use as payments processing system in games where some economic components can be added., It can be resource extraction, item crafting, hunting or in-game trading. Both online games can be a good candidate to integrate cryptocurrency donate/withdrawal., Algo: X11 PoW/PoS | Last PoW block: 40320 | Block time: 60 sec | Block reward: 250 Cryptos, no halves, Maturity: 240 blocks | Retarget: 12 minutes | PoW stage: 4 weeks, PoS starts 7 days after launch, Total emission: 80.000.000 Cryptos | Transaction confirm after: 10 blocks, Stake interest: 10% | PoS start block: 10240 | Min/max stake: 24 hour / 30 days, PREMINE 12.5% for IPO & BOUNTIES nemcoin_nem.png 2014-01 NEM NEM 新经币 2014-01-19 100% infrastructure 4000000000 2nd gen developed from bitcoin and NXT protocols NEM Coin. payzorcoin_pzr.png 2014-07 PZR Payzorcoin 2014-07-24 3% 80 1500000 The coin with real use., Ticker: PZR, Algorithm : scrypt, Premine : 3% (to cover costs of development and bounties), Block time : 80 seconds, PoS: 8% per year, Block reward: 50, PoW Time: 10 days, Max coins: 1,5 million, Twitter:, Website:, IRC freenode: #payzor, Spanish version:, Chinese version:, Italian version:, Polish version:, Russian version:, WHATS NEW THAT PayzorCoin BRINGS ?, Lamassu has put up their code as open source, we decided to fork it so the ATM machines will be able to trade BTC/currency + PZR/currency + PZR/BTC., This way our coin will have use as a real cash exchangable currency, not just other altocoin that changes few things in the code., We`ve already contacted Lamassu to speak with them about implementation of code without giving it away as Open Source - we do not want other altocoins to just copy what we prepared., Currently we have place in one of biggest malls to put our ATM at in Berlin/Germany and after launching it, we already have 2 other places that already own Lamassu confirmed they will implement code if it works - 1 in USA and 1 in Asia., After sucessfuly launching code for Lamassu we plan to add our currency to other Bitcoin ATMs. This way we can became real force in decentralised world of cryptocurrency., Proof of ATM:, Proof of Lamassu accepting forking the code:, WALLETS, source and windows qt., - win wallet in /resources/, sources in proper folders - bundle of sources & qts., - MAC OSX wallet, Code:, [2014/07/22 20:38:51]server=1, listen=1, rpcuser=user, rpcpassword=pass, rpcport=14621, addnode=162.218.92.33, addnode=167.160.94.200, addnode=173.78.49.31:14622, addnode=88.213.221.77, POOLS,,,,,,, multipool,, - multipool and own port, EXCHANGES,,, block explorer, ARTICLES/MERCHANTS/GAMES, article, article, GAME, FAUCET, PAYZOR COIN BOUNTIES, Logo, Reddit Tipbot 200 PZR, Twitter Tipbot 200 PZR, Explorer claimed, Pools, P2p-pool - 100 PZR, Games - 20 PZR each, Faucet - 50 PZR, Video - claimed, Articles on Blogs - 10 PZR each, Translations - Russian, Chinesse, Spanish, Polish, French, Andoir Wallet 500 PZR, OSX Qt, Lamassu GUI Translations - Chinesse, French, Polish, Spanish, Russian - all claimed nectarcoin_sweet.png 2014-10 SWEET Nectarcoin 2014-10-05 100% 60 12500000 12.5 million ico coins, POS framework that will include new POP* (Proof of Promotion) adoption incentive system that rewards users for sending coins to a new zero-balance address distant from their own source address. Tx fees: market-based. Block generation: 1 minute. Stake minimum age: to be determined. Variable Network-Stake-Dependent Interest: up to 30% a year. Proof of Promotion: Award of a set amount of coins for each send to a new zero-balance address to which the coin holder is not historically, closely associated aurumcoin_au.png 5 hours 2014-09 2 AU AurumCoin 2014-09-23 0.9% 1 60 300000 Coin type Bitcoin (SHA256), Halving 150000 blocks, Initial coins per block 1 coins, Target spacing 1 min, Target timespan 5 h, Coinbase maturity 2 blocks, Premine 0.9 % For Exchange,pool,advertisement, Max coinbase 300000 coins ('h', 150000, 'b') bittorcoin_bt.png 2014-07 BT BitToCcoin 2014-07-28 0.3% 120 1000000 We present BitTor (BT), this is a project we've been working on for quite a while now. The objective is to have the ultimate Tor based privacy coin. This coin will feature Tor technology at launch and there a lot more privacy enhancement features being cooked behind the scenes. Visit this thread or our Twitter regularly for more info., 2 minute blocks, 1M total BitTors, 1 coin per generated block for the first 600k blocks, 0,5 between blocks 600k and 1.2M, 0.25 between blocks 1.2M and 1.6M., Difficulty retargets after every block, Algorithm: X13, 720 blocks per day, 0.3% Premine (With a value equal to the first 3000 blocks), Launching 28/07/2014, 7pm GMT linearcoin_lnc.png 2014-06 LNC Linearcoin 2014-06-21 <0.001% 300 25025025 No IPO, No POS, < 0.001% Premine, No worries!, POW Algorithm: X11, Total Coins: 25,025,025 Coins, Block Time: 5 Minutes, Confirmations: 10, Block Maturity: 80, Difficulty Re-target Time: Every Block (KimotoGravityWell), Halving Rate: Decreases 0.00005 Coins every block, Mining Window: ~9.51 years reducinglin furrycoin_fur.png 2014-05 FUR FurryCoin re-target every 60 blocks, 5000 coins per block, and 50 million total coins. halcyoncoin_hal.png 2014-08 HAL Halcyoncoin 2014-08-16 0 60 2294000 9% POS Interest per Year, Min Stake Age 12 Hours, Max Stake Age 30 Days, Total coins: 2294000 - Will Be Closer To 2m With POS, POS Begin At Block 2000 carboncreditcoin_unit.png 2015-03 UNIT CarbonCredit 2015-03-19 100% 60 1000000000 Name: CarbonCredit , Symbol: UNIT, Algorithm: Scrypt (POS), Interest: 2% / year, Total supply: ~16,800,000,000 UNIT, First Block: 1,000,000,000 UNIT, 100,000,000 UNIT for “Hidden” Company (don't sell before have debit card), 10,000,000 UNIT for ICO @ 100 satoshi, 290,000,000 UNIT for ICO @ 200 satoshi (Bittrex), 600,000,000 UNIT for Bounty/Activity gollumcoin_glm.png KGW 2014-03 GLM Gollumcoin 50 17000000 re-target using Kimoto's Gravity Well, 50 coins per block, and 17 million total coins. vendettacoin_vac.png 2014-02 VAC Vendettacoin Scrypt-jane based cryptocoin with re-target every week, block range coin rewards, and 21 million total coins. autocoin_aut.png 2014-03 AUT Autocoin 50 coins per block and 21 million total coins. eagscurrencycoin_eags.png 2014-12 EAGS EAGScurrency 2014-12-03 6000000 60 20445500 PREMINE: 0% (No Premine), NAME: EAGS Currency, TICKER: EAGS, ALGO: SHA256, TOTAL COINS: 20,445,500, POW COINS: 10,445,500, Miners: 40% of the 10 millions + the 445,500, XDE ICO INVETORS AND OTHERS: 60% of the 10 millions, MINING PHASE: 5 to 10 days (May change), FAIR LAUNCH: 0 Block Reward after 1st Block - Up to 400 blocks, BLOCK REWARD: none yet (Will be updated), POS COINS: 10 millions, POS INTEREST: 5%, MIN AGE: 24HRs, MAX AGE: 90 days (Can be changed) bitpeso_btp.png 5 hours 2014-04 BTP BitPeso 2014-03-15 1000 300 200000000 1000 coins per block and 200 million total coins. knightcoin_kgc.png 2014-05 KGC Knightcoin 2014-05-12 10% 400 60 250000000 The difficulty of adjustment: 1440 adjust again. First pieces of 25000000 coins. Second the beginning of each block of 400 dollars until the end of 225000000 coins. ('h', 1440, 'b') harmonycoin_hmy.png 2014-01 HMY Harmonycoin 2014-01-01 400 60 7032000 ipo, anon Algorithm: X13, 7,032,000 Total Coins , POW/POS & Anon, 6% Annual Interest once coin becomes Pure POS, Block Height: 60 Seconds, Confirmations: 6 , POW blocks: 10080, Block Reward: 400 coins per block, .5% of every POW block will go to a charity fund sauroncoin_sau.png 2013-08 SAU SauronRings trickydickyfunbillcoin_tdfb.png 2015-04 200 TDFB TrickyDickyFunBill 2015-04-28 0 60 1500000 Today is a great day, we proudly announce to you all the birth of TDFB coin - a decentralized web infrastructure. TDFB Foundation is made up of a group of people. TDFB is not the same promise-full cryptocoin, we might bring many of the coolest features and functionalities, such as POW-POS phase (stake multiplier! WoW!) and possibly a web wallet. The TDFB coin is just another example of the versatility of altcoins. Ticker: TDFB No Premine, No IPO/ICO. Block time: 60 seconds Coinbase maturity: 200 blocks min stake age: 8hrs max stake age: 36hrs Algo: SHA256d recyclingcoin_rex.png 2014-05 REX Recyclingcoin 2014-05-19 cypherfunk_funk.png 2014-03 FUNK Cypherfunks 49300000000.0 re-target using Kimoto's Gravity Well, time based coin rewards, and 50 billion total coins. mcoin_m.png 1 block 2015-01 M Mcoin 2015-01-01 80 150 closed source 2000000000 MCOIN SPECIFICATIONS, X15 Algorithm, Block discovery: every 2.5 minutes, 80 coins per block reward, Total amount 2 Billion coins, Block retarget time on every block, Proof of Work, Proof of Stake reward 8%, (and decreasing every year by 1% until reaching 1% yearly) tradecoin_tdc.png 2013-06 TDC TradeCoin 2013-06-25 100% Scrypt-based fully mined cryptocoin with 147 million total coins. 100% premine LTC clone experiment. Dying. woolfcoin_woo.png 2015-05 WOO Woolfcoin 2015-05-28 0 50 120 2100000 Name: WOOLFCOIN, Symbol: WOO, Algorithm: SCRYPT,POW, Block Reward: 50, Block Reward Halving Rate: 21000, Block time: 120 sec, Premine: 0% !!!!, Total supply: 2 100 000 ('h', 21000, 'b') ultracoin_utc.png 10 blocks 2014-02 5 50 UTC UltraCoin 2014-02-01 2% 30 60 100000000 Scrypt-jane based cryptocoin with re-target max 200% per 30 minutes, 50 coins per block, and 100 million total coins. ('h', 4000000, 'b') h2ocoin_h2o.png 2014-03 H2O H2Ocoin 2000000000 The specs of the coin are the following: Scrypt -N Adaptive KGW ( Kimoto's Gravity Well ) Total coin supply: 2 billions First month blocks will give per 2000 H2O Drops (Coins) so we attract miners. While, after 1 month, it will go to 1600 coins. novacoin_nvc.png 2013-02 NVC Novacoin 2013-02-09 2000000000 Scrypt hashing[like LTC], proof of stake [like PPC]. Novacoin-based. established 100% NVCS (1.3%), SA 30/90, Coins 95 hispacoin_hpc.png 2014-03 HPC Hispacoin 50 50000000 50 coins per block and 50 million total coins. templecoin_tpc.png 2014-03 TPC Templecoin 99.81% premined 2718281828.45904523 shekelcoin_she.png 2013-11 SHE Shekelcoin 2013-11-27 re-target every 2016 blocks, POW, 50 coins per block, and 21 billion total coins. onyxcoin_onyx.png 2014-07 ONYX OnyxCoin 2014-07-30 1% 60 15000248 A new precious gem of X13 cryptocurrency, ONYX., RPC Port: 51996, P2P Port: 50995, Algorithm: X13 POW/POS starts on block 1,500,000 (2.8 years approx.), Ticker: ONYX, Max Proof-of-Work Coins: Approximately ~15,000,248 ONYX, 5% Proof-of-Stake Yearly Interest, 1% Premine, Block 1 is 1% Premine; Block 2-250 are 1 ONYX #low reward to prevent an instamine; Block 251-1500000 are 10 ONYX; PoW Ends on Block 1,500,000 (approx. ~2.8 years), 60 Second Blocks, Difficulty adjusts per block, Block Explorer:, Block Explorer with Richlist Coming Soon, DOWNLOAD ONYX, GitHub Source Code:, Windows onyxcoin-qt:, Mirror #1:, Mac onyxcoin-qt:, Linux onyxcoin-qt:, Mining Pools, We need pool operators to add us!,,,,,,,,,,, Exchanges, We will be working to get Onyxcoin on exchanges!, Block Explorer available in wallet, online version needed!, Twitter:, IRC Chat Channel is #onyxcoin, ONYX Game:, ONYX Faucet:, Roadmap, Where is Onyxcoin going?, We are a group of anonymous developers that want to better cryptocurrency, with Onyxcoin we bring to you fixed block rewards, X13 proof-of-work mining, and proof-of-stake interest after 2.8 years. Onyxcoin features many of the latest features found in many other popular coins, we will strive as developers to create a very stable, fully functional, feature packed cryptocurrency. We do have plans to add anonymous features in the future for users to have the option to create anonymous transactions in the blockchain. A date and whitepaper will be available in time. Want to join us? PM onyxdev on bitcointalk. This thread and graphics will be updated soon., 7/30/2014 - ninja launch, Next date/timeline to be announced., The premine address is: oMpC9Y6ZQpMJ18aoTPFq1G3yNaSqrrQjKZ, For transparency and monitoring. It will not be dumped, we will be using the premine for future bounties and contests. riecoin_ric.png 12 hours 2014-02 RIC Riecoin 2014-02-03 0 50 84000000.0 Bitcoin fork using prime constellations as PoW with re-target every 288 blocks, 50 coins per block, and 84 million total coins. Riecoin Foundation is online: ('h', 840000, 'b') hongketocoin_hkc.png 5 hours 2014-05 50 HKC Hongketocoin 2014-05-17 60 600000000 The Coin For Fighters! Remix Version ('h', 50000, 'b') blackmarketcoin_blm.png 2015-05 200 BLM BlackMarket 2015-05-16 60 1000000 Algorithm: X15, Block time: 60 seconds, Block reward: 1 - ICO, 2-99 = 0, 100-1100 = 10, 1101-1500 = 20, 1501-2000 = 25, 2001-2500 = 20, 2501-3000 = 15, PoW Supply: ~100K, Last PoW block: 3000, Maturity: 200 Blocks, PoS Reward: 1, PoS Min Stake Age: 6 hours, PoS Starts: after POW sentarocoin_sen.png 2015-06 SEN Sentaro 2015-06-13 100 60 990000 Algorithm: Scrypt, Ticker: SEN, PoW blocks: 10000, Reward: 100 SEN ( starting from block 100 ), Block Target: 60 sec, PoS: 1%, Stake Min age: 6 hrs, Masternode: 5000 SEN , Masternode reward: 1 SEN + 10% of PoS, Anti-instamine: 0 reward first 100 blocks stalwartbucks_sbx.png 2014-01 SBX Stalwartbucks parkbytecoin_pkb.png 2015-05 PKB ParkByte 2015-05-07 2000000 90 60 25000000 Coin Name: ParkByte, Algo: SHA256, Ticket: grifcoin_gfc.png 2 days 2014-06 GFC Grifcoin 2014-06-02 120 42000000 Initial Coin per Block: 40, Halving Rate: 525600 blocks (about every 4 years), Block Generation: 2 minutes, Readjust Difficulty: 2 days, Maximum Coins: 42,000,000, Algorithm: Scrypt, If you are a fan of Red vs Blue on Roosterteeth, then this cryptocurrency is for you! Based on a character known as Grif, Grifcoin is a new altcoin for the true RvB fans. If you haven't seen RvB, check it out on Roosterteeth (). Remember to start on Season 1! This is still very early in development. I am just releasing to those who want to get started early. In order to establish some sort of value, I am going to back Grifcoin: 0.01 BTC for every 5,000 GFC. Being that the average subpar laptop can (as of now) mine Grifcoin about 1,500 per night, I think this is pretty generous. Basically, if you donate some of your computing power to the network and manage to mine 5,000 Grifcoins, I will send you 0.01 Bitcoins. socialkryptcoin_sokry.png 1 block 2014-10 SOKRY SocialKrypt 2014-10-13 120 3040000 Algo: x13, Block Time: 2 Minutes, Diff retarget: every block, Block reward: , First block mined for ICO funds: 2'500'000 SOKRY (Unsold coins will be destroyed), 2 - 3600 blocks = 0 SOKRY for ICO duration and fair start., 3601 - 7200 blocks = 100 SOKRY per block for a total of 360'000 SOKRY (ca.), 7201 - 10080 blocks = 50 SOKRY per block for a total of 143950 SOKRY, Total coins if ICO will be sold entirely: 3'040'000 SOKRY, Pos Interest: 5% crypticoin_xcr.png 2014-06 XCR Crypti 和氏币 2014-06-16 60 infrastructure 1000000 Crypti is a 2nd generation crypto-currency designed from the ground up. It's built to solve the biggest problem with current crypto-currencies, a lack of purchase motivation. Crypti is being built from scratch, complete with a new Elliptic curve algorithm, relying on No previous crypto-currencies code. It uses a combination of 3 radically new proof-of-stake algorithms making it a first of it's kind. It's being developed in lightweight Node.js, and can be ran on any device out there, including mobile devices. vastcoin_vast.png 1 block 2014-07 7 70 VAST Vastcoin 2014-07-21 0 4000 30 3937500 Name: VastCoin, Symbol: VAST, Algo: X11, POW/POS hybrid, POW: 25 hours of mining, Block time: 30sec, Block reward: 4000 VAST, halving every 500 blocks, Last POW block: 3000, POS starts at block 1500, Block retarget: every block, POS interest: 5% per year, Coins stake after 15 hours, Confirmations per transaction 7, Coins mature after 70 blocks, Premine: 0 (0%) ('h', 500, 'b') clustercoin_clstr.png 1 block 2014-08 5 100 CLSTR Clustercoin 2014-08-23 7000000 60 36610000 Coin name: ClusterCoin Symbol: CLSTR Algorithm: SHA-256 ICO coins: 7,000,000 Block rewards: 1st block: 7,000,000 CLSTR (ICO coins) 2 - 13,000 blocks: 0 CLSTR (no mining reward during ICO) 13,001 - 10,000,000 blocks: 30 3 CLSTR Confirmations for blocks to mature: 100 Re-target difficulty each block Total CLSTR coins generated during pow: 36,610,000 CLSTR Block time: 60 seconds Transaction confirmations: 5 legendarycoin_lgd.png 2014-03 6 50 LGD Legendary Coin 2014-03-25 3% 7 120 10000000 ('h', 64800, 'b') gascoin_gas.png 2013-07 GAS Gascoin 2013-07-27 dynamic retarget, POW, and dynamic block reward. keycoin_key.png 2014-07 KEY Keycoin 2014-07-15 0 60 1000000 Distribution: POW/POS, Algorithm: x13, Total Coins: Approx 1,000,000 in POW., No premine., Fair blocks for Community Bonus, Blocks 1601-2000 Larger Rewards Than First., Block Reward: Block 1 - 400: 500 KEY, Block 401 - 700: 450 KEY, Block 701 - 1000: 350 KEY, Block 1001 - 1300: 300 KEY, Block 1301 - 1600: 250 KEY, Block 1601 - 2000: 550 KEY, Block 2001 - 2100: 150 KEY, Block 2101 - 3000: 100 KEY, Block 3001 - 4000: 50 KEY, Block 4001 - 4500: 1 KEY, POS Only After Block 4500, Block Time: 60 Seconds, Yearly POS Interest: 20%, 50 Minted Block Confirmations, p2p port: 37941, rpc port: 37942, Website: Coming Soon, Downloads: Source:, Windows Wallet: Mega, Nodes: addnode=188.226.184.232, addnode=162.243.122.97, addnode=107.170.153.13, addnode=95.85.62.12, addnode=178.62.54.145, Exchanges: Bittrex, Cryptsy, Pools: SuchPool, MineP.it, P2PoolCoin.com - EU Server | Canada Server, POOL.MN, dedicatedpool, suprnova.cc, Chainz: Now on CoinMarketCap also. zhaocaibicoin_zcc.png 2012-08 ZCC ZcCoin 招财币 2012-08-05 0 1000000000 Scrypt-jane based cryptocoin with POW / POS, 50 coins per block, and 1 billion total coins. Yacoin clone. ZC Coin - ZCCoin Abbreviation: ZCC Algorithm: SCRYPT Date Founded: 8/5/3012 Total Coins: 1 Billion Confirm Per Transaction: Re-Target Time: Block Time: Block Reward: 50 Per block/Varies Diff Adjustment: Premine:. Novacoin-based. (50, 'v') gramcoin_gram.png 2015-04 GRAM Gramcoin 2015-04-22 3% 220 120 65360000 Name: GRAMCOIN, Symbol: GRAM, Algorithm: SHA256,POW, Block Reward: 220, Block Reward Halving Rate: 144000, Block time: 120 sec, Difficulty retarget: Dark Gravity Wave, Premine: 3% (only for coin development), Total supply: 65360000 ('h', 144000, 'b') topcoin_top.png 2014-02 TOP TopCoin 11520000000 tripcoin_trip.png 1 block 2015-08 10 240 TRIP TripCoin 2015-08-24 60 closed source 100000000 Ticker: TRIP, FullName: TRIPCoin, Total coins: 100 000 000, Algo: 100% POS - scrypt, Annual Interest: 5%, Block time: 60 seconds, Min. transaction fee: 0.0001 TRIP, Confirmations: 10, maturity: 240, Min stake age: 4 hours, no max age * Difficulty retarget: every block, Blocks: 1 - 40mln TRIP, 2-2880 - 0.1mln TRAVEL. The value of a coin is fixed in relation to the US dollar. 1 TRIP = 1$ vocalcoin_vocal.png 2014-11 VOCAL Vocal 2014-11-06 100% 60 28000000 There will be initially 28 million vocal coins to be used on the entire platform., 100% Proof of Stake, Initial Coin Supply: 28 Million coins, Stake Interest: 4%, Minimum Stake: 6 hours energycoin_enrg.png 2014-05 ENRG Energycoin 2014-05-15 30 110000000 Free IPO lifeextensioncoin_ext.png 2015-06 EXT Life Extension 2015-06-27 100% 60 5000000 LAUNCH : June 27, TICKER: EXT, Algo: QUBIT, Max Supply: 5 Millions, FULL PoS: 1% yearly, Minimun Stake Age: 1 hour, IDO: 100% (5 Millions) xagoncoin_xagon.png 1 day 2015-11 XAGON Xagon 2015-11-12 40000000 60 60 200000000 XAGON [~SLATE~] Distributed Identity, T2 Alpha, Abstract Commerce ('h', 1333334 'b') solidcoin2_sc2.png 2011-10 SC2 SolidCoin2 2011-10-10 vaginacoin_vag.png 2013-05 VAG Vaginacoin 2013-05-31 retarget every 690 blocks, 420 coins per block, and 69 million total coins. mastermintcoin_mm.png 2015-11 50 MM MasterMint 2015-11-17 150000000 60 1500000000 MasterMint, Features, 1500000000 Total supply, 50 % PoS annual, Masternodes 50% of PoS reward, 60 seconds block time, 1 hour minimum stake, 50 confirms to mature greekcoin_grk.png 2014-06 GRK Greekcoin 2014-06-06 30 100000000 X11 - POW/POS coin, Number of coins: 100,000,000, POW blocks : 10,000, POS : 5% yearly, Block Time : 30 sec, Confirmation : 6, Pre-mine:1% taxitokenscoin_hack.png 168 hours 2015-01 10 HACK TaxiTokens 2015-01-01 1% 50 600 21000000 Taxi Tokens ($HACK) is a new decentralized cryptocurrency that will be used “exclusively” by NYC for-hire vehicles. EX: Medallion Drivers, FHV Drivers and Others. The New York City Taxi and Limousine Commission currently is nation’s “largest” and most active taxi and for-hire vehicle regulator with a current Driver License count of 179,873 qualified drivers servicing an average of 241 million passengers a year generating over $11 billion annually. ($HACK) was designed by a thriving New York City taxi cab owner/operator who is dedicated “initially” on attracting NYC merchant/user adoption through many industry contacts and peers developed over a 20 year period of employment in the transportation industry. In 2013, the Taxi and Limousine Commission (TLC) realized a number of achievements in keeping with its mission of ensuring that New Yorkers and visitors to the city have access to taxicabs and other for-hire ground transportation that are safe, efficient, sufficiently plentiful, and provide a good passenger experience. However, NYC for-hire drivers and transportation owner/operators are always looking for new ways to generate income with the rising operating costs incurred to remain afloat in this competitive industry. ($HACK) was created to encourage the adoption of cryptocurrency into this sector while creating a new and innovative payment platform and revenue stream that affords both driver(s) and passenger(s) the opportunity of utilizing our secure network as a preferred method of payment. ($HACK) eliminates typical merchant provider transactions fees allowing users to save money while increasing their overall bottom line profits., Algorithm: Scrypt Nr. Coins: 21 Million Coins., Coin Type Litecoin (Scrypt), Halving 210000 Blocks, Initial Coins Per Block 50 Coins, Target Spacing 10 min, Target Timespan 168 h, Coinbase Maturity 10 Blocks, Premine 210,000 Coins, Max Coinbase 21000000 + 0 = 21000000 Coins ('h', 210000, 'b') skycoin_sky.png 700 blocks 2013-12 SKY Skycoin 2013-12-22 0 15 120 4000000 Sky Coin - Skycoin Abbreviation: SKY Algorithm: SCRYPT Date Founded: 12/22/2013 Total Coins: 4 Million Confirm Per Transaction: Re-Target Time: Block Time: 2 Minutes Block Reward: 15 Coins Diff Adjustment: 700 Blocks Premine:. austrocoin_atc.png 2014-06 ATC AustroCoin 2014-06-03 60 8500000 dubeucoin_dbu.png 10 blocks 2014-12 DBU Dubeucoin 2014-12-10 500 30 2000000000 Dubeucoin is a lite version of Bitcoin using scrypt as a proof-of-work algorithm., 30 second block targets, subsidy halves in 840k blocks (~4 years), ~2 billion total coins, The rest is the same as Bitcoin., 500 coins per block, 10 blocks to retarget difficulty ('h', 840000, 'b') binarycoin_bic.png 2013-12 BIC Binarycoin 9 blocks premined re-target every .2 days, 20 coins per block, and 84 million total coins. BINARYCOIN BIC Website SPECIFICATIONS Algorithm: Scrypt Max Coins: 84,096,000 Block Time: 15 Seconds Difficulty Retarget Time: 0.2 days Premine: 9 blocks as test Starting difficulty: 0.00024414 20 coins per block DOWNLOADS Windows Linux MacOS Source Code Sample binarycoin.conf server=1 rpcuser=%USER% rpcpassword=%RPCPASS% addnode=198.98.102.131 PORTS Unknown. POOLS Cryptopools EXCHANGES None Available. SOCIAL None Available. SERVICES / OTHER None Available. Launch thread: Abbreviation: BIC Algorithm: SCRYPT Date Founded: 12/23/2013 Total Coins: 84.09 Million Confirm Per Transaction: Re-Target Time: Block Time: 15 Seconds Block Reward: 20 Coins Diff Adjustment. coffeecoin2_cfc2.png 1 block 2014-05 4 40 CFC2 CoffeeCoin 2014-05-05 20 relaunch 100000000 Algorithm: 100% Proof-of-Stake, Block Time: 20 Seconds, Difficulty Re-target: Every block, Total Coins: Starting at 100,000,000, Mined Block Confirmation: 40, Transaction Confirmation: 4, Minimum Stake Age: 24 Hours, Maximum Stake Age: 90 Days bumbacoin_clot.png 5 hours 2014-05 CLOT Bumbacoin 2014-05-02 10000 60 meme 201600000 ('h', 10080, 'b') 2112coin_yyz.png 360 blocks 2015-02 21 YYZ 2112coin 2015-02-14 0 120 120 21120000 No Premine, Based on Bitcoin 0.9.3 SHA256 source, Block target: 2 minutes, Difficulty retargets every 360 blocks, Block reward: 120 YYZ, halving every 88,000 blocks, Total coin supply: 21,120,000 YYZ, Coinbase maturity: 21 blocks ('h', 88000, 'b') pencoin_pen.png 2015-01 PEN PENCoin 2015-01-23 60 543732 total volume of coins released: 1,000,000 coins, volume of ICO: 250,000 coins, block time: 60 seconds, rewards:, first 2 days = 25 coins, another 2 days = 50, another 2 days = 75, after that, mining ends and coin is fully POS, yearly POS reward: 15%, algorythm: Scrypt sha1coin_sha.png 3.5 days 2014-03 SHA Sha1coin 2014-03-28 50 150 83550000 SHA-1 based cryptocoin with re-target every 2016 blocks, 50 coins per block, and 83.55 million total coins. ('h', 840000, 'b') imperiumcoin_impc.png 168 hours 2014-10 6 10 IMPC ImperiumCoin 2014-10-27 0 122 600 51240000 Premine: 0% - No Premine!, Coin type: Scrypt, Halving: 210000 blocks, Initial coins per block: 122 coins, Target spacing: 10 min, Target timespan: 168 h, Coinbase maturity: 10 blocks, Max coinbase= 51240000 coins, Confirmation : 6 ('h', 210000, 'b') bitcredits_credits.png 60 minutes 2014-01 5 CREDITS Bitcredits 2014-01-12 3.5% 10000 60 repertoire 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 From the international enterprise team who bring you GoldBar, Silverbar and PlatinumBar is the new BitCredits (Credits). maples_map.png 2013-09 MAP Maples 0 premined Clone of NVC. Maples - Maples Abbreviation: MPL Algorithm: SCRYPT Date Founded: 9/6/2013 Total Coins: 50 Million Confirm Per Transaction: 40 Re-Target Time: Block Time: 1 Minute Block Reward: 0-3000 50 Maples 3000-5000 30 Maples 5000-7500 20 Maples 7500+ 8 Maples per block halving every 3 years Diff Adjustment: Premine: 50K. wpcoin_wpc.png 2014-03 WPC WPcoin 2014-03-19 0 60 320000000 darkfoxcoin_drx.png 2014-08 6 100 DRX DarkFoxcoin 2014-08-09 0 95 60 15000000 Algorithm: Sha256, Total coins : 15,000,000, Coin maturity: 100 blocks, Transaction confirmation: 6, Last pow block: 7200 , Block reward: 95, Block time: 60 sec, Pow total coins 759,000 , Pos interest 9 %, Premine 0.5 %, Stake min age 12h, No max age. flashcoin_flc.png 2013-06 FLC Flashcoin 2013-06-02 re-target every 60 blocks, 1 coins per block, and 10 million total coins. bmwcoin_bmc.png 2014-01 BMC BMWcoin 2014-01-29 50 120 12000000 goldrushcoin_grc.png 2014-12 3 60 GRC GoldRushcoin 2014-12-12 60 25000000 The Gold Rush Coin is part of The Gold Rush Project. The overall project is taking the global gold rush research of Sam Sewell III and creating a full series of Books, Merchandising, and 2D and 3D animated versions of multiple games. Each step along the way and each game provides the player the ability to create and store wealth via gold backed crypto coin, Hybrid POW/POS, BlockTime 60 seconds, Block 1 - 500 = 1 GRCX, Block 501 - 720 = 5 GRCX, Block > 720 = 20 GRCX, 3 confirms per transaction., Total 25,000,000 GRCX, Scrypt POW, POS staking 24 hours - 42 days for max stake livecoin_lvc.png 2014-01 LVC Livecoin greenbackscoin_gb.png 2014-08 6 GB GreenBacks 2014-08-15 30 infrastructure 20000000 20 Million Initial Coins:, *Max of 10% additional coins created each year, dependent upon number of coins in staking, 100% Proof of Stake: , *10% Staking Interest – 8 Hours for coin maturity to begin staking, 30 Second Block Time & 6 block confirmations required:, *Transactions will be 25x faster than Bitcoin maidsafecoin_maid.png 2014-04 MAID Maidsafecoin 2014-04-22 0 1 intermediary token BTC-SAFE 430000000 MaidSafecoin is an intermediary token residing on the Bitcoin blockchain that will be exchanged on a 1:1 basis with safecoin the native currency of SAFE Network a P2P Internet platform. Safecoin is required to pay for services on the SAFE decentralized internet. litestarcoin_lts.png 1 block 2015-02 4 50 LTS Litestarcoin 2015-02-18 1.7% 1000 90 120000000 The perfect micropayment solution. Money over IP X11, PoW/Pos separate technologies with no effect on PoW mining, Advanced asic proof PoW/PoS coin, Random SuperBlock, True random, so no cheating by big hashpowers, 4 transaction confirmations (Very fast), 50 minted block confirmations, Short poS block time (less than 1 minute), 90 sec PoW block time, diff retarget each block for PoW, Daily random superblock payout 10X, Weekly random superblock payout 100X, block payout reduced 20% every 20 days, Initial payout: 1k, 15 sec PoS block time, PoS diff retarget each block , Block minted: 14 Total: 2005000 (1.7%), Min: 1 day / Max 100 days, Variable PoS payout: 60% first year / 20 % second year, Max suppli: 120 millions approx ('rp', 20, 20, 'd') findyoucoin_find.png 2014-11 20 FIND Findcoin 2014-11-19 100% 120 100000000 100% of FIND will be distributed via in-wallet faucet. All you have to do is just keep your wallet running to receive the funds. By doing so you will also be supporting the FIND network, FindYou Coin Specs, Name: FindYou Coin, Thicker: FIND, Initial coin supply: 100,000,000 FIND (will be distributed via faucet), Distribution period: 21 days (200,000 coins per hour), POS annual interest: 1%, No POW rewards. Algo is X13 (if that makes any difference - FIND will be 100% distributed via in-wallet faucet), Same distribution model as miraclecoin?, Faucet distributes 200 000 coins/hour. microcoin_mrc.png 2014-01 MRC microCoin 0 100000000000 Algorithm: Scrypt-Jane Proof-Of-Work with modified nFactor Block Time: 32 seconds No Premine bitswiftcoin_swift.png 2014-10 50 SWIFT BitSwift 飞速币 2014-10-02 30 blocknet 4000000 X15, 4 Million SWIFT Total Supply, 3% PoS Annually, Minimum 4 Hour Minimum Stake Age, No SWIFT Transfer Fees!, 30 Second Blocks xanaxcoin_xnx.png DGW 2015-04 XNX Xanaxcoin 2015-04-25 0 210 120 42000000 Name: XANAXCOIN, Symbol: XNX, Algorithm: SCRYPT(POW), Block Reward: 210, Block Reward Halving Rate: 100000, Block time: 120 sec, Difficulty retarget: D.G.W., Premine: NO!, Total supply: 42 000 000 ('h', 100000, 'b') lycancoin_lyc.png 2014-03 LYC Lycancoin 4950000000.0 iridiumcoin_iri.png 5 hours 2014-08 IRI IridiumCoin 2014-08-12 0.7% 770 300 77000000 ('h', 50000, 'b') eoncoin_eon.png 2014-04 EON EonCoin 1000000000 Algorithm : Scrypt Pow + PoS Symbol : EON Block target : 30 seconds Block reward : 5000 EON Retarget Every block Max PoW coins : 1 Billions EON P2P port : 7201 RPC port : 7200 teddycoin_tdy.png 2014-07 | http://bel-epa.com/download/xml | CC-MAIN-2017-17 | refinedweb | 35,007 | 61.16 |
NAME
App::dupfind::Threaded::ThreadManagement - Thread management logic, abstracted safely away in its own namespace
VERSION
version 0.172690
DESCRIPTION
Safely tucks away the management of threads and all the threading logic that makes the Map-Reduce feature of App::dupfind possible. Thanks goes out to BrowserUk at perlmonks.org who helped me get this code on the right track.
Please don't use this module by itself. It is for internal use only.
METHODS
- clear_counter
The "counter" in App::dupfind::Threaded::ThreadManagment is used to keep track of how many items have been processed by the thread pool. This clear_counter method resets the counter.
- counter
The counter itself is a read-only accessor around a thread-shared scalar int
- create_thread_pool
Method that spawns N threads to do work for the map-reducer, where N is the number of threads that the user has specified with the "--threads N" flag.
If the user has requested a progress bar, one more thread is spawned which does nothing but monitor overall progress of the thread pool and update the progress bar.
- delete_mapped
Deletes a same-size file grouping from the global hashref of same-size file groupings, by key. Thread-safe management of the datastructure is insured.
- end_wait_thread_pool
Ends the global work queue, sets the thread terminate flag to 1, and joins the threads in the pool.
- increment_counter
Thread-safe way to increment the global shared counter (scalar int value). The counter is key to the successful execution of all threads, and it is imperative that any code executed by the map-reduce engine properly calls increment_counter for work every item it processes.
- init_flag
R/W accessor whose read value indicates that the thread pool has been initiated when the return value is true.
- mapped
Read-only accessor to the global shared hashref of "work items", i.e.- the groupings of same-size files which are potential duplicates
- push_mapped
The thread-safe way to push a new item or items onto a grouping in the global mapped work-item registry. $obj->push_mapped( key => @items );
- reset_all
Resets all flags, queues, work mappings, and counters.
In turn, it calls:
$self->reset_queue
$self->clear_counter
$self->reset_mapped
$self->init_flag( 0 )
$self->term_flag( 0 )
- reset_mapped
Destroys the globally-shared mapping of work items. Takes no arguments.
- reset_queue
Creates a new Thread::Queue object, which is then accessible via a call to $self->work_queue. This does not end the previous Thread::Queue object. That is the responsibility of $self->end_wait_thread_pool
- term_flag
R/W accessor method that indicates to threads that they should exit when a true value is returned.
- threads_progress
This method is executed by the single helper thread whose sole responsibility is to update the progress bar for the thread pool, if the user has requested a progress bar
- work_queue
Thread-safe shared Thread::Queue object for the current $self object. It can be ended via $self->end_wait_thread_pool and it can thereafter be recreated via $self->reset_queue
The map part of the map-reduce engine pushes work items into this queue. | https://metacpan.org/pod/release/TOMMY/App-dupfind-0.172690/lib/App/dupfind/Threaded/ThreadManagement.pm | CC-MAIN-2021-21 | refinedweb | 504 | 61.87 |
Over.
technorati: grizzly sailfin glassfish
v2ur1 sip
Great.
Posted by: themanwiththecigar on February 13, 2008 at 06:13 AM:Controller.setProtocolChainInstanceHandler() or you can set it per SelectorHandler:SelectorHandler.setProtocolChainInstanceHandler()So to answer your last question, your don't need to have 2 Controller, just need to create two instance of a ProtocolChainInstanceHandler, and set them on the SelectorHandler handling cA and cB :-)Thanks!
Posted by: jfarcand on February 13, 2008 at 07:23 AM
Thanks,.
Posted by: themanwiththecigar on February 13, 2008 at 08:39 AM
The!
Posted by: jfarcand on February 13, 2008 at 06:52 PM
Bon
Sébastien
Posted by: survivant on March 01, 2008 at 04:27 PM
Hi
Posted by: jfarcand on March 03, 2008 at 09:08 AM:41 AM
Where is the stack for filtering the SIP messages.Do you have the full source code in samples or somewhere we can download and study.
Posted by: nikhil_garg on May 05, 2009 at 07:13 AM
Hello,.
Posted by: moster on June 25, 2009 at 06:49 AM
Bonjour JF.
I am trying to create my one protocolparserfilter but I can't find the package for ProtocolParser.
Could you help me please!
thanks
Posted by: moster on July 01, 2009 at 03:55 AM
Some more precision for my request:
I made:
import com.sun.grizzly.ProtocolParser;
but it is not accepted.
The external Jar that I add is grizzly-framework-1.7.3.2.jar
THANK YOU
Posted by: moster on July 01, 2009 at 04:01 AM | http://weblogs.java.net/blog/jfarcand/archive/2008/02/writing_a_tcpud_1.html | crawl-002 | refinedweb | 255 | 65.52 |
Unity3D and Mecanim: The Good, the Bad, and the Strange
I have spent the better part of the past two month using Unity 4 and building a game with the new Mecanim animation system. With the video tutorial and material out there, it seems like a great next step for managing animations in Unity. This post outlines some of my experiences and solutions to make the Mecanim system work with non-trivial situations.
Creating a Good Setup for Mecanim
When starting with any new system or concept, I think it is best to start from scratch. Eliminate all other variables and concentrate on the new functionality or API. For my journey, that begins inside of my 3d modeling package of choice – Blender.
Creating models and armatures that play well with Mecanim is a pre-requisite. To begin, I studied the Mecanim project in Unity and see how they create, place, and name all of the bones. While all of the bones are required to have a successful setup, I tried to create a rig that has all of the potentials bones ( i.e. eyes, mouth, finger joints).
An important point when rigging is making sure everything is on a 2D plane. You can easily check this by going into the top view of your animation package and making sure everything is flat.
I didn’t thoroughly test it, but naming the bones the same as the Unity mecanim character helps. This is particularly helpful when rigging the hand all of its joints. If you are interested in the rigging algorithm and how it is calculated, I highly recommend the Auto-Setup of a Humanoid article by Rune Johanson. This guy is crazy smart.
One point that helps auto-setup work is having either bones be perfectly straight up/down, or perfectly left/right. Having the trapezius area or fingers at 45 degree angles can mess up the calculations. A good way to test is to configure the rig and move bones around.
It is easy to move the bones around until you get the rotation correct. You can see in the image above that the rotation is not correct for the left shoulder. Using the rotate tool in Unity, you can see exactly what needs to be done to the armature to put it in the correct T-Pose. Unity does come with an option to automatically fix the pose, but I think it is better to start it off correctly. It is good to have a model that imports perfectly into Unity to streamline the process.
Animation Baking Sweetness ( Pun intended!)
A very important part of Mecanim that hasn’t had much explanation is the animation settings in the Import area. You can get to this area by going to the import settings and clicking the Animation button.
Selecting an animation from the clips will present options on how you want Unity to handle the animations. For my purpose, I am creating the motion inside of my 3d modeling tool. Unity will use the character translations and move the game object based off of it. This type of motion is considered “Root Motion”.
Each Root Transform feature has “Bake int Pose” checkbox. This checkbox tells Unity how it should handle animation changes to movement and rotation. If it is unchecked, it means Unity will use the data and update the transform of the game object inside of Unity. If checked, Unity will not update the game object’s transform.
For example, I could have a character animation that skips and bounces up and down. In this case I would NOT want Unity to update the transform when the character is going up and down. Doing this will make a camera follow script bounce up and down as well since it follows a gameobject’s transform. This would get annoying very quickly.
To avoid this we can bake the root transform position (Y). You can see it in the image above. This will tell Unity to ignore any Y axis changes to the position for the game object it is attached to. The transform will stay grounded now and the camera bouncing will disappear
For running/walking animations, it is good to bake the Root Transform Rotation. With Mecanim, the root position is around the hips. Hips move left to right when running, so the camera will be swinging around the character if you don’t bake the rotation in.
Animation Events
With the first iteration of the Mecanim system, there is a very limited API on what you can access and control. You can change the animation speed for all animations, (animator.speed), but you cannot target a specific animation and update its speed. Making the API more robust is in the roadmap, but for now you have to make due.
A troubling area for me was creating animation events at different points. Since there is no options right now in Mecanim, I was contemplating going back to the old system. Instead, I decided to create a script that will handle animation timing similar to animation events would. The basic setup is like the following:
using UnityEngine; using System.Collections; public class FireBallAttack : MonoBehaviour { public GameObject fireballPrefab; private Animator animator; private float attackAnimationLength = 4.1f; private float attackDelay = 2.2f; public void Setup () { animator = GetComponent(); animator.SetBool("Attack", true); animator.SetLayerWeight(0, 0); //normal animation layer animator.SetLayerWeight(1, 0); //additive animation layer animator.SetLayerWeight(2, 1); //override animation layer StartCoroutine("attack"); StartCoroutine("cleanup"); } public IEnumerator attack() { yield return new WaitForSeconds(attackDelay); poweringUp.particleSystem.Stop(); Vector3 fireballPosition = transform.position + (transform.forward*2) + (transform.up*2); GameObject _tempFireball = (GameObject)Instantiate (fireball, fireballPosition, Quaternion.Euler(transform.forward )); _tempFireball.GetComponent().moveDirection = transform.forward * 15.0f; } public IEnumerator cleanup() { yield return new WaitForSeconds(attackAnimationLength); //return to normal layer animator.SetLayerWeight(0, 1); animator.SetLayerWeight(1, 0); animator.SetLayerWeight(2, 0); animator.SetBool("Attack", false); } }
Without going line by line, here is a breakdown on what the code is doing:
- Setup() – will be called externally when you want the animation to start.
- Start co-routines attack() and cleanup() to start doing the work
- Attack() – This is in an IEnumerator so it can use the yield statement. you can do your logic here with instantiating or doing other effects. This attack doesn’t start until 2.2 seconds, so it waits to run the code until the appropriate time.
- Cleanup() – Attacking is done when the animation is complete. Destroy any miscellaneous objects you don’t want and set the layer weights back to what they were before.
I don’t believe you can access the animation clip’s length from the API, so I manually code how long the animation is. Hopefully this property can be accessed in the future releases.
This is more or less the framework I am using for the game I am working on. I will do some posts about my progress in the coming weeks. Because I have multiple attacks, I am actually using an interface and have a controller layer that manages which attack is playing.
In the script above, you will notice that I was changing the “Attack” bool to true/false at the beginning and end of the script. This helps other scripts determine whether an attack is in progress or not. You don’t want to start doing a crazy attack while in the middle of a jump. The SetBool() is accessible from the Animator class, so anything should be able to reference it if needed.
The Road Less Traveled
Mecanim is a great feature in Unity that really helps manage sophisticated animation logic. It still has a ways before it reaches its potential, but I do think that it will get significantly better in the coming releases. Hopefully now you have a better idea of some of Mecanim’s strengths and weakness – as well as how to overcome them.
Really enjoy what you are doing, are you incorporating melee combat into your game, and if so, how are you implementing it with mecanim? Are you using the animator for your states? I’ll limit the questions to this (for now). Can’t wait to see more.
Nice work. I am planning to mess around with Mecanim at some point, and if I do, I’ll probably start with the info here. Looking forward to seeing your progress on this.
I am doing melee combat as well as projectiles ( think anime action genre). For the attacks and events, Each attack has an ID that it belongs to. The ID helps Mecanim know which animation to play. it is stored as an int in the Animator window ( animator.GetInt() ). All of the attacks use a common interface ( called IAttack), so it is easy to swap out the implementations of them as long as they use it. It also makes it easy for different attacks to be on different layers when they need to be referenced. That is why I am manually setting layer weights in the attack script. This allows each attack to be vastly different using the same logic structure. I will probably post more on the game architecture a little later.
Nice article Scott, while it’s completely obvious now I really wasn’t sure how I’d get Y movement into my animations while not using apply root motion to the animator! Thanks…
On a side note/idea. We’re using the tag property on each animation in the mecanim flow config so we can identify each animation, or group of them via animator.GetCurrentAnimatorStateInfo and animator.GetNextAnimatorStateInfo. To get the id’s we have to hash the stirngs but with the combination of those calls we can determine the entry, looping and exit of each state.
Another bonus is if we use the same tag with two different animations they appear as a single state. The only downside is if you want to do also drive physics you need to have a second state engine as Mecanim transitions between animations on it’s own terms (not always instantly). Not ideal but a solid implementation none the less :-)
Thanks for the insight. That is a good idea to use the tags with the animation hashes to get the info about it. I know with the last version of Unity 4.2, they opened up more of the Mecanim API, so there might even be better ways to manage animation states now. Cheers!
Hello scott, thanks a lot for this :D
Hmm, i’m newbie to use unity. I wanna ask you, in my project, i want my model 3D have several different animation clip. What’s the best choice do you think, whether i have to make 3D model with different animation one by one (like: model1.fbx , model2.fbx, etc) or i have to make one model3D with several animation directly. But, how can i make model3D with several animation directly? -.-
Thanks, warm regards. XD
I think that mecanim system is god only for retargetting animation of other model, but to have a complete control of the character animation is better than animator, whit animation you can control speed and time of a single animation, with animation you have a deepest control of any single animation.
And for retargetting animation i use icole and 3dxchange on a single model than i import it into unity.
So mecanim is unnecessary at least.
I usually just keep all of the different animations inside one file. It makes updating and managing a long list easier. I have heard of specific workflows where doing them separately is beneficial, but not until you start working on larger teams. Hope that helps. | http://www.scottpetrovic.com/blog/2013/03/unity3d-and-mecanim-the-good-the-bad-and-the-strange/ | CC-MAIN-2018-17 | refinedweb | 1,943 | 56.25 |
's why i have open the question.
My both the Url's are register on the Internet. but all the request will hit to mail.domain.com. just i have configure my legacy.domain.com in CAS Server for Exchange 2003 USers and active sync.
Know i am looking for some expert connents what will be the impact on the active sync users. when they hit mail.domin.com and proxying to legacy.domin.com.
Watch this video to see how easy it is to make mass changes to Active Directory from an external text file without using complicated scripts.
Exchange won't use legacy.domain.com to access the Exchange 2003 server - that traffic is HTTP based and an internal request, so the SSL cert / cert name isn't relevant beyond the CAS server.
Request will start from mail.domain.com goes to CAS server , CAS serve check the mailbox server and proxy the request to Exchange 2003 server.
For Exchange 2003 we are using "Legacy " that means url will change , so is there any imapct on the active sync users.Because when you use owa with same settings url change on the internet exploere from mail.domin.com to legacy.domin.com.
So during access the mail from phone that could be the chalange .
I don't know url will chnge on the active sync users what will be the impact ?
Legacy.domain.com is only relevant for Exchange 2010 / 2007 co-existence.
Extract from link provided earlier:
"Are there any configuration changes I must make on my Exchange 2007 Client Access servers?
In order to introduce Exchange 2010 into your "Internet Facing AD Site" and support your Exchange 2007 (and possibly 2003) mailboxes, you will move the primary EAS namespace that is associated with the Exchange 2007 CAS array and associate it with the Exchange 2010 CAS array. In addition, you will create a new namespace for legacy access, legacy.contoso.com (note that the name can be anything you want) and associate it with your Exchange 2007 CAS array."
Thanks for your clearification.
1.Installing Exchange 2010 within your organization on new hardware.
2.Configuring Exchange 2010 Client Access.
3.Creating a set of legacy host names and associating those host names with your Exchange 2003 infrastructure.
4.Obtaining a digital certificate with the names you'll be using during the coexistence period and installing it on your Exchange 2010 Client Access server.
5.Associating the host name you currently use for your Exchange 2003 infrastructure with your newly installed Exchange 2010 infrastructure.
6.Moving mailboxes from Exchange 2003 to Exchange 2010.
7.Decommissioning your Exchange 2003 infrastructure.The relevant part to answer your question is:
."
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial | https://www.experts-exchange.com/questions/27630905/Impact-on-Active-Sync-during-Exchagne-2003-to-Exchange-2010-Migration.html | CC-MAIN-2018-30 | refinedweb | 487 | 58.79 |
September 2015
Volume 30 Number 9
Data Points - Revisiting JavaScript Data Binding -- Now with Aurelia
By Julie Lerman
I’ve never been much of a front-end developer, but every once in a while I have a reason to play with a UI. In my June 2012 column, after seeing a user group presentation on Knockout.js, I dug in and wrote an article about data binding OData in a Web site using Knockout (msdn.microsoft.com/magazine/jj133816). A few months later, I wrote about adding Breeze.js into the mix to make the data binding with Knockout.js even easier (msdn.microsoft.com/magazine/jj863129). When I wrote about modernizing an old ASP.NET 2.0 Web Forms app and using Knockout again in 2014, I got teased by some friends who said that Knockout was “so 2012.” Newer frameworks such as Angular also did data binding and so much more. But I wasn’t really interested in the “so much more,” so Knockout was fine for me.
Well, now it’s 2015 and while Knockout is still valid and relevant and awesome for JavaScript data binding, I wanted to spend some time with one of the new frameworks and chose Aurelia (Aurelia.io) because I know so many Web developers who are excited about it. Aurelia was started by Rob Eisenberg, who was behind Durandal, another JavaScript client framework, though he stopped production on that to go work on the Angular team at Google. Eventually he left Angular and, rather than reviving Durandal, created Aurelia from the ground up. There are a lot of interesting things about Aurelia. I have a lot more to learn for sure, but want to share with you some of the data-binding tricks I learned, as well as a bit of trickery with EcmaScript 6 (ES6), the latest version of JavaScript, which became a standard in June 2015.
An ASP.NET Web API to Serve Data to My Web Site
I’m using an ASP.NET Web API I built to expose data I’m persisting with Entity Framework 6. The Web API has a handful of simple methods to be called via HTTP.
The Get method, shown in Figure 1, takes in some query and paging parameters and passes them on to a repository method that retrieves a list of Ninja objects and their related Clan objects using an Entity Framework DbContext. Once the Get method has the results in hand, it transforms those results into a set of ViewListNinja Data Transfer Objects (DTOs), defined elsewhere. This is an important step because of the way JSON is serialized, which is a bit overzealous with circular references from the Clan back to other Ninjas. With the DTOs, I avoid a wasteful amount of data going over the wire and I get to shape the results to more closely match what’s on the client.
Figure 1 The Get Method from the Web API
public IEnumerable<ViewListNinja> Get(string query = "", int page = 0, int pageSize = 20) { var ninjas = _repo.GetQueryableNinjasWithClan(query, page, pageSize); return ninjas.Select(n => new ViewListNinja { ClanName = n.Clan.ClanName, DateOfBirth = n.DateOfBirth, Id = n.Id, Name = n.Name, ServedInOniwaban = n.ServedInOniwaban }); }
Figure 2 shows a view of the JSON results from that method based on a query that retrieved two Ninja objects.
Figure 2 JSON Results from Web API Request for List of Ninjas
.png)
Querying the Web API Using the Aurelia Framework
The Aurelia paradigm pairs one view model (a JavaScript class) with one view (an HTML file) and performs data binding between the two. Therefore, I have a ninjas.js file and a ninjas.html file. The Ninjas view model is defined as having an array of Ninjas, as well as a Ninja object:
export class Ninja { searchEntry = ''; ninjas = []; ninjaId = ''; ninja = ''; currentPage = 1; textShowAll = 'Show All'; constructor(http) { this.http = http; }
The most critical method in ninjas.js is retrieveNinjas, which calls the Web API:
retrieveNinjas() { return this.http.createRequest( "/ninjas/?page=" + this.currentPage + "&pageSize=100&query=" + this.searchEntry) .asGet().send().then(response => { this.ninjas = response.content; }); }
Elsewhere in the Web app I’ve set up the base URL that Aurelia will find and incorporate it into the request’s URL:
x.withBaseUrl('');
It’s worth noting that the ninjas.js file is just JavaScript. If you’ve used Knockout, you might recall that you have to set up the view model using Knockout notation so that when the object is bound to the markup, Knockout will know what to do with it. That’s not the case for Aurelia.
The response now includes the list of Ninjas and I set that to the ninjas array of my view model, which gets returned to the ninjas.html page that triggered the request. There’s nothing in the markup that identifies the model—that’s taken care of because the model is paired with the HTML. In fact, most of the page is standard HTML and some JavaScript, with just a few special commands that Aurelia will find and handle.
Data Binding, String Interpolation and Formatting
The most interesting part of ninjas.html is the div, which is used to display the list of ninjas:
<div class="row"> <div repeat. <a href="#/ninjas/${ninja.Id}" class="btn btn-default btn-sm" > <span class="glyphicon glyphicon-pencil" /> </a> <a click. <span class="glyphicon glyphicon-trash" /> </a> ${ninja.Name} ${ninja.ServedInOniwaban ? '[Oniwaban]':''} Birthdate:${ninja.DateOfBirth | dateFormat} </div> </div>
The first Aurelia-specific markup in that code is repeat.for “ninja of ninjas,” which follows an ES6 paradigm for looping. Because Aurelia comprehends the view model, it understands that “ninjas” is the property defined as an array. The variable “ninja” can be any name, for example “foo.” It just represents each item in the ninjas array. Now it’s just a matter of iterating through the items in the ninjas array. Skip down to the markup where the properties are displayed, for example “${ninja.Name}.” This is an ES6 feature that Aurelia takes advantage of, which is called string interpolation. String interpolation makes it easier to compose strings with variables embedded in them rather than, for example, by concatenating. So with a variable name=“Julie,” I can write in JavaScript:
`Hi, ${name}!`
and it will be processed as “Hi, Julie!” Aurelia takes advantage of ES6 string interpolation and infers one-way data binding when it encounters that syntax. So that last line of code, beginning with ${ninja.Name} will output the ninja properties along with the rest of the HTML text. If you code in C# or Visual Basic, it’s worth noting that string interpolation is a new feature of both C# 6.0 and Visual Basic 14.
I did have to learn a bit more JavaScript syntax along the way, for example the conditional evaluation of the ServedInOniwaban Boolean, which turns out to have the same syntax as C#—condition? true : false.
The date formatting I’ve applied to the DateOfBirth property is another Aurelia feature and may be familiar to you if you’ve used XAML. Aurelia uses value converters. I like to use the moment JavaScript library to help with date and time formatting and am taking advantage of that in the date-format.js class:
import moment from 'moment'; export class dateFormatValueConverter { toView(value) { return moment(value).format('M/D/YYYY'); } }
Keep in mind that you do need “ValueConverter” in the class name.
At the top of the HTML page, just under the initial <template> element, I have a reference to that file:
<template> <require from="./date-format"></require>
Now the string interpolation is able to find the dateFormat[ValueConverter] in my markup and apply it to the output, as shown in Figure 3.
Figure 3 Displaying All Ninjas with Properties Bound One Way by Aurelia via String Interpolation
.png)
I want to point out another instance of binding in the div, but it’s event binding, not data binding. Notice that in the first hyperlink tag I use a common syntax, embedding a URL in the href attribute. In the second, however, I don’t use href. Instead, I use click.delegate. Delegate is not a new command, but Aurelia does handle it in a special way that’s much more powerful than a standard onclick event handler. Read more at bit.ly/1Jvj38Z. I’ll continue to focus on the data-related binding here.
The edit icons lead to a URL that includes the ninja’s ID. I’ve instructed the Aurelia routing mechanism to route to a page called Edit.html. This is tied to a view model that’s in a class named Edit.js.
The most critical methods of Edit.js are for retrieving and saving the selected ninja. Let’s start with retrieveNinja:
retrieveNinja(id) { return this.http.createRequest("/ninjas/" + id) .asGet().send().then(response => { this.ninja = response.content; }); }
This builds a similar request to my Web API as before, but this time appending the id to the request.
In the ninjas.js class I bound the results to a ninjas array property of my view model. Here I’m setting the results, a single object, to the ninja property of the current view model.
Here is the Web API method that will get called because of the id that’s appended to the URI:
public Ninja Get(int id) { return _repo.GetNinjaWithEquipmentAndClan(id); }
The results from this method are much richer than those returned for the ninja list. Figure 4 shows the JSON returned from one of the requests.
Figure 4 JSON Results of a WebAPI Request for a Single Ninja
.png)
Once I’ve pushed the results into my view model, I can bind the properties of the ninja to elements in the HTML page. This time I use the .bind command. Aurelia infers if something should be bound one-way or two-way or some other way. In fact, as you saw in ninjas.html, it used its underlying binding workflow when presented with the string interpolation. There it used one-time, one-way binding. Here, because I’m using the .bind command and am binding to input elements, Aurelia infers that I want two-way binding. That’s its default choice, which I could override by using .one-way or another command in place of .bind.
For brevity, I’ll extract just the relevant markup rather than the surrounding elements.
Here’s an input element bound to the Name property of the ninja property in the model I passed back from the modelview class:
<input value.
And here’s another, this time bound to the DateOfBirth field. I love that I could easily reuse the date-format value converter even in this context, using the syntax I learned earlier:
<input value.
I also want to list the equipment on the same page, similar to how I listed the ninjas so they can be edited and deleted. For demonstration, I’ve gone as far as displaying that list as strings, but haven’t implemented the edit and delete features, nor a way to add equipment:
<div repeat. ${equip.Name} ${equip.Type} </div>
Figure 5 shows the form with the data binding.
Figure 5 The Edit Page Displaying the Equipment List
.png)
Aurelia also has a feature called adaptive binding, which allows it to adapt its binding capabilities based on available features of the browser or even the objects that are passed in. It’s pretty cool and designed to be able to work alongside browsers and libraries as they evolve over time. You can read more about the adaptive binding at bit.ly/1GhDCDB.
Currently, you can only edit the ninja name, birth date and Oniwaban indicator. When a user unchecks Served in Oniwaban and hits the Save button, this action calls into my view model save method, which does something interesting before pushing the data back to my Web API. Currently, as you saw in Figure 4, the ninja object is a deep graph. I don’t need to send all of that back for saving, just the relevant properties. Because I’m using EF on the other end, I want to be sure the properties I didn’t edit also go back so they don’t get replaced with null values in the database. So I’m creating an on-the-fly DTO called ninjaRoot. I’ve already declared ninjaRoot as a property of my view model. But the definition of ninjaRoot will be indicated by how I build it in my Save method (see Figure 6). I’ve been careful to use the same property names and casing that my WebAPI expects so it can deserialize this into the known Ninja type in the API.
Figure 6 The Save Method in the Edit Model-View
save() { this.ninjaRoot = { Id: this.ninja.Id, ServedInOniwaban: this.ninja.ServedInOniwaban, ClanId: this.ninja.ClanId, Name: this.ninja.Name, DateOfBirth: this.ninja.DateOfBirth, DateCreated: this.ninja.DateCreated, DateModified: this.ninja.DateModified }; this.http.createRequest("/ninjas/") .asPost() .withHeader('Content-Type', 'application/json; charset=utf-8') .withContent(this.ninjaRoot).send() .then(response => { this.myRouter.navigate('ninjas'); }).catch(err => { console.log(err); }); }
Notice the “asPost” method in this call. This ensures the request goes to the Post method in my Web API:
public void Post([FromBody] object ninja) { var asNinja = JsonConvert.DeserializeObject<Ninja> (ninja.ToString()); _repo.SaveUpdatedNinja(asNinja); }
The JSON object is deserialized into a local Ninja object and then passed on to my repository method, which knows to update this object in the database.
When I return to the Ninjas list on my Web site, the change is reflected in the output.
More Than Just Data Binding
Keep in mind that Aurelia is a much richer framework than for just data binding, but true to my nature, I focused on data in my first steps to explore this tool. You can learn so much more at the Aurelia Web site, and there’s a thriving community at gitter.im/Aurelia/Discuss.
I want to proffer a nod of thanks to the tutaurelia.net Web site, a blog series about combining ASP.NET Web API and Aurelia. I leaned on author Bart Van Hoey’s Part 6 for my first pass at displaying the list of ninjas. Perhaps by the time this article is published, there will be more posts added to the series.
The download for this article will include both the Web API solution and the Aurelia Web site. You can use the Web API solution in Visual Studio. I’ve built it in Visual Studio 2015 using Entity Framework 6 and the Microsoft .NET Framework 4.6. If you want to run the Web site, you’ll need to visit the Aurelia.io site to learn how to install Aurelia and run the Web site. You can also see me demonstrate this application in my Pluralsight course, “Getting Started with Entity Framework 6,” at bit.ly/PS-EF6Start. expert for reviewing this article: Rob Eisenberg (Durandal Inc.)
Rob Eisenberg has a decade of experience focusing on user interface architecture and engineering. He's the creator of multiple UI frameworks such as Caliburn.Micro and Durandal, which have been used by thousands of developers worldwide. A former Angular 2.0 team member, Rob currently leads the Aurelia project, a next generation JavaScript client framework for mobile, desktop and Web that leverages simple conventions to empower your creativity. Learn more at | https://docs.microsoft.com/en-us/archive/msdn-magazine/2015/september/data-points-revisiting-javascript-data-binding-now-with-aurelia | CC-MAIN-2019-47 | refinedweb | 2,571 | 65.22 |
Meeting:Packaging IRC log 20070102
From FedoraProject
(09:08:21 AM) The topic for #fedora-packaging is: Channel for Fedora packaging related discussion | Next Meeting: Tuesday, January 2nd, 2007 17:00 UTC (09:08:47 AM) tibbs: Well, there's one more. (09:08:49 AM) scop [n=scop@cs27014175.pp.htv.fi] entered the room. (09:08:57 AM) tibbs: And another. (09:09:04 AM) scop: hi (09:09:08 AM) abadger1999: Hello (09:09:19 AM) tibbs: That's five when spot finishes his phone call. (09:21:04 AM) spot: sorry guys, still on this call (09:21:10 AM) spot: hooray for eager users (09:22:56 AM) ***scop needs to go in about 15 minutes (09:23:17 AM) XulChris: can we quickly discuss php additions to packaging guidelines? (09:23:23 AM) XulChris: please (09:24:11 AM) tibbs: What's the URL of the draft proposal? (09:24:38 AM) RemiFedora: i've not post it on the wiki, but only on the ML (09:24:46 AM) RemiFedora: (09:24:53 AM) RemiFedora: (09:26:40 AM) RemiFedora: i probably should have updated the PackagingDrafts/PHP page (09:26:41 AM) scop: "no file in BUILD directory": I think that's nothing specific to PHP packages, no packages should extract anything directly there (09:27:25 AM) tibbs: Still, all PHP packages will have that errant file, so it doesn't really hurt to mention it there. (09:27:55 AM) scop: sure, but we should check that it is mentioned somewhere more "generic" in addition (09:28:08 AM) tibbs: Are the rpmdevtools and php-pear-PEAR-Command-Packaging templates the same? (09:28:42 AM) RemiFedora: nearly. rpmdevtools must be filed, php-pear-PEAR-Command-Packaging is automatic. (09:29:25 AM) tibbs: That makes sense; it's like the perl template and cpanspec, which fills in the template. (09:29:42 AM) scop: FWIW, I don't like the "php_zend_api" or "php-abi" naming (09:29:54 AM) tibbs: It's not for us to choose, however. (09:29:58 AM) scop: it's different from everything else (09:30:19 AM) ***spot is off his call (09:30:21 AM) scop: php(abi), php-zend(api) would be better, see python and ruby (09:30:27 AM) tibbs: I have no idea why they were chosen to be underscores. We weren't consulted about that. (09:30:34 AM) spot: +1 on scop's comment (09:30:45 AM) tibbs: Still, it's an F7-only thing, so it could be changed with no impact. (09:31:13 AM) tibbs: Anyone with a direct line to Joe Orton to ask why he chose those names? (09:31:34 AM) RemiFedora: there's a little mistake, it's Requires: php-zend-api = %{php_zend_api} (09:32:30 AM) abadger1999: scop: +1 (09:32:51 AM) tibbs: Well, that's a bit better, but the parenthesized bits seem to be more in line with how other packages work. (09:33:02 AM) spot: RemiFedora: can you work with Joe on possibly renaming the macros to match the existing guidelines? (09:33:05 AM) lutter: I think (1) and (2) are fine to stick into the guidelines as 'tips for packagers' .. we should really have more of those (09:33:51 AM) RemiFedora: Bug still open : (09:34:06 AM) abadger1999: RemiFedora: > Requires: php-api could be keep, but is not very useful (09:34:13 AM) abadger1999: Where does php-api come from then? (09:34:13 AM) spot: lutter: yeah, i think (1) and (2) fall under "good common sense", i don't see a problem with adopting them into the php guidelines (09:34:38 AM) tibbs: It was there in the PHP package when we started writing guidelines, so that's what we had to use. (09:35:27 AM) scop: could we go through the AI status quickly before I need to go? (09:35:32 AM) tibbs: I think I get the zend thing now; php-api is for script compatibility; php-zend-api is for compiled extension capability. ? (09:35:45 AM) abadger1999: Does php define a "php api version" that isn't really used, or is it something that we've made up? (09:35:45 AM) RemiFedora: i've asked for php-zend-api, because it's match the sanity check done by php when loading module. (09:35:53 AM) spot: abadger1999: yes. (09:36:03 AM) spot: abadger1999: look at the bugzilla link remi posted, it explains it somewhat (09:36:07 AM) abadger1999: k (09:36:44 AM) spot: scop: lets go over the AI status (09:37:16 AM) scop: I believe provides/obsoletes and debuginfo are writeup stuff (09:37:32 AM) spot: yeah, i think so (09:37:47 AM) abadger1999: agreed (09:37:53 AM) spot: whereas, icon handling and RPMGroups were vetoed? (09:37:53 AM) scop: iconcache still not very clear yet -> followup? (09:38:15 AM) tibbs: rpmgroups wasn't exactly vetoed. (09:38:55 AM) tibbs: We were just asking for the ability to go ahead and get RPM changed to allow it to be optional without changing any guidelines which require it. (09:39:11 AM) spot: tibbs: ah, ok. i'll talk to paul and see if he's still willing to do it (09:39:43 AM) tibbs: At this point, though, everyone wants to see how the infrastructure develops before actually removing it from the guidelines. (09:39:56 AM) ***spot nods (09:40:02 AM) spot: rpm is only a bit in flux atm. :) (09:40:10 AM) scop: I think making Group optional would cause some compatibility issues (09:40:23 AM) scop: certainly at least on the specfile level (09:40:42 AM) tibbs: Nobody could point to anything specific, but you don't know until you actually try it. (09:40:49 AM) spot: scop: ehh, i'm not sure it would, the only issues i know of would be tools that depend on that field being populated (09:41:03 AM) spot: undefined Group is the same as undefined Vendor/Packager (09:41:22 AM) spot: (if it is flagged as optional) (09:41:43 AM) ***spot did a fair amount of testing with it the first time around (09:41:45 AM) Rathann|work: what's the motivation behind removing Group: ? (09:41:55 AM) scop: well, I'm sure it would cause those specfile level incompats (09:41:56 AM) spot: Rathann|work: the field is populated with garbage values (09:42:04 AM) scop: $ rpmbuild -bb foo.spec (09:42:10 AM) scop: error: Group field must be present in package: (main package) (09:42:27 AM) spot: scop: if rpm is patched to flag it as optional, it doesn't throw that error (09:42:35 AM) scop: yes, but older versions do (09:42:37 AM) spot: it would be an FC7+ only change (09:42:39 AM) Rathann|work: spot: we mandated buildroot and version and release, why can't we mandate group? (09:42:57 AM) f13: Rathann|work: because Group is unnecessary duplication and often times wrong data. (09:42:57 AM) tibbs: Because we have comps, which is more expressive. (09:42:58 AM) spot: Rathann|work: well, we could, but nothing is using the Group data (09:42:59 AM) scop: which means packagers who want to get rid of it must pay the price of maintaining different specfiles (09:43:12 AM) tibbs: Not that we haven't discussed all of this before. (09:43:13 AM) f13: Rathann|work: the 'Group' field isn't used by any of the tools that represent packages by groups. (09:43:28 AM) tibbs: There were proposals to encode the comps information into Groups. (09:43:32 AM) scop: f13, I believe Synaptic does use it (09:43:35 AM) scop: (FWIW) (09:44:14 AM) f13: scop: yes, but thats not really the "tool of choice" of Fedora. (09:44:17 AM) spot: scop: since it would be optional, packagers can weigh the tradeoffs (09:44:28 AM) f13: scop: Fedora's tools of choice being anaconda, yum, pirut, pup, all make use of comps through yum. (09:44:31 AM) abadger1999: scop: Or %if %{fedora}0 it. But they could hold off on making changes until FC8+ if they want everything to stay nice and neat. (09:44:34 AM) f13: same with the composition tool. (09:45:02 AM) abadger1999: scop: I agree with the synaptic/smart argument... until there's an alternative. (09:45:13 AM) scop: shrug, just pointing out that "nothing is using the Group data" is not true (09:45:38 AM) spot: i think that the idea is that storing package grouping metadata as hardcoded entries in the spec is a bad way to do package grouping (09:45:41 AM) abadger1999: (alternative to the Group tag, rather than alternative to synaptic/smart :-) (09:45:44 AM) f13: scop: ok, any of the tools we care about. (09:45:58 AM) f13: I'm with spot on this one (09:46:05 AM) spot: especially if someone wants to make Fedora Game Machine with their own custom groupings (09:46:05 AM) scop: f13, that's subjective (no, I don't personally care about synaptic) (09:46:08 AM) f13: package grouping is really up to the person creating the repo, not building the package. (09:46:14 AM) f13: as such, the group data needs to be outside the package. (09:46:29 AM) abadger1999: f13: -1 to not caring about smart/synaptic (09:46:33 AM) tibbs: That's an excellent argument. (09:46:44 AM) scop: I suppose rpmbuild would have to fill *some* Group info anyway (09:46:49 AM) abadger1999: +1 to the rest of your argument (09:46:57 AM) f13: scop: at some point, the Fedora board or whatever needs to draw a line regarding what package tools they care about. We can't hold things up because joe's package tool doesn't yet support it. (09:47:01 AM) tibbs: It would use (none) as with other optional fields. (09:47:10 AM) f13: and honestly, at this point, we care about yum and yum based tools. (09:47:59 AM) ***spot thinks that the reason we do changes over time, in phases, is to let Fedora contributors/packagers keep their preferred tools running (09:48:22 AM) abadger1999: f13: The line that I'm seeing we're crossing is that we don't have a place to store full group information for every package yet. (09:48:24 AM) spot: libs change, APIs change, for better and worse (09:48:39 AM) abadger1999: f13: So package managers that use Group tag information have nothing to port to. (09:48:44 AM) abadger1999: (yet) (09:48:56 AM) scop: I can't agree with shrugging off everything that is not yum based when discussing changes in rpm (09:48:57 AM) tibbs: Could we resolve the PHP issue before we close? (09:49:00 AM) f13: which isn't so bad when the data is oft times wrong. (09:49:14 AM) ***Rathann|work agrees with scop (09:49:35 AM) spot: scop: i agree with you as well (09:49:38 AM) spot: ok. PHP (09:49:47 AM) f13: scop: sure, but the only reason that synaptic/apt doesn't use comps is they chose not to. Not because they couldn't, they just chose not to. Too bad for them. (09:50:02 AM) tibbs: There were three changes proposed for the PHP guidelines: (09:50:04 AM) f13: their tool doesn't represent the same data as the installer, which it really should. (09:50:20 AM) tibbs: The first two are just hints for packagers: (09:50:36 AM) abadger1999: f13: No. comps is not inclusive. (09:50:41 AM) tibbs: note the %setup -q -c must be used because PHP package tarballs will always have a file that's not in any directory. (09:51:10 AM) tibbs: note that rpmdevtools provides a template, and that pear make-rpm-spec will generate a specfile for you. (09:51:29 AM) spot: The first two items seem like common sense to me. +1 to both (09:51:31 AM) tibbs: These are just hints to packagers and don't really change any guidelines. (09:51:35 AM) tibbs: +1 to them. (09:51:47 AM) lutter: +1 for adding them (09:51:53 AM) abadger1999: +1 (09:51:54 AM) f13: abadger1999: and Group is invalid and not inclusive either. (09:52:30 AM) f13: abadger1999: so really, they're trading a non-inclusive for incorrect. (09:52:33 AM) spot: f13? scop? (09:52:54 AM) f13: spot: I don't know enough about php to really offer a valid vote. (09:53:16 AM) spot: f13: you really don't need to know anything about php here (09:53:16 AM) f13: +0 I guess. (09:53:26 AM) spot: packages shouldn't dump crap in BUILD/ (09:53:35 AM) f13: right, +1 (09:53:41 AM) scop: sorry, was distracted (09:53:44 AM) spot: there are templates available for spec files, or you can have one automade. (09:54:29 AM) spot: ok, thats +5 (09:54:33 AM) scop: +1 to 1) obviously, both PHP specific and generic mention somewhere (09:54:43 AM) Rathann|work: php(api) and php-zend(abi) are cosmetic really, although it'd be great if they could be made consistent with python/perl/ruby (09:54:48 AM) spot: on item number 3, the API version check (09:54:59 AM) spot: i really want them to be consistent with the normal api standards (09:55:07 AM) scop: about 2), does "pear make-rpm-spec Sources.tgz" generate acceptable results? (09:55:29 AM) Rathann|work: scop: if it doesn't, it can probably be made to do so (09:55:32 AM) f13: well, python and perl are autogenerated by rpm aren't they? (09:55:36 AM) spot: when that change is made php(abi)/php-zend(api), i'd vote for it, but as is, -1 (09:55:39 AM) f13: should we wait until php is too? (09:55:47 AM) RemiFedora: i think so, only the %file must be updated (09:55:47 AM) tibbs: scop: according to discussion here, it generates something almost identical to the rpmdevtools template but filled in. (09:56:19 AM) tibbs: Unfortunately we have something of a broken situation unless PECL modules depend on the proper symbol. (09:56:38 AM) scop: tibbs, Rathann|work, ok, if it does create good stuff, +1 - but it needs to be verified before it goes to guidelines (09:56:42 AM) tibbs: PECL modules are the compiled extentions.l (09:57:11 AM) tibbs: As only F7 provides the symbol, we can still get it changed. (09:57:12 AM) ***Rathann|work should check if the php stuff he packaged once is already in extras... (09:57:21 AM) abadger1999: scop: That sounds reasonable. (09:57:29 AM) tibbs: If any of the Red Hat folks can talk to Joe Orton about it, it would be great. (09:57:46 AM) spot: there is an open bugzilla ticket where jorton seems to be responsive (09:58:02 AM) spot: (09:58:13 AM) abadger1999: RemiFedora, XulChris: How much might pear make-rpm-spec Foo.tgx change its output over time? (09:59:18 AM) RemiFedora: i think "pear make-rpm-spec" provides a fedora specific specfile, XulChris ? (09:59:30 AM) XulChris: RemiFedora: i havnt tried it (09:59:38 AM) abadger1999: RemiFedora: If so, that would eliminate my worry. (10:00:32 AM) scop: PEAR-Command-Packaging might however have some issues with that continuing to be the case in the future (10:01:18 AM) tibbs: The template is supplied by the Fedora packager, I think. (10:01:18 AM) scop: eg. if it creates php(abi) and friends, that might not work outside of Fedora and upstream could be interested in leaning towards compatibility... (10:01:46 AM) scop: ah, that sounds good (10:01:49 AM) tibbs: Yes, that template will only change if the Fedora packager changes it. (10:03:28 AM) tibbs: So are we resolved to try to get the symbol changed to php-zend(api)? (10:03:45 AM) spot: tibbs: +1 from me (10:03:47 AM) abadger1999: tibbs: +1 (10:03:56 AM) Rathann|work: consistency is good (10:03:57 AM) tibbs: Or was it php(zend-api) ? (10:04:15 AM) tibbs: Which is more consistent? It's a symbol provided by the PHP package. (10:04:22 AM) spot: php(api) and php-zend(api) (10:04:32 AM) tibbs: I'm not exactly sure what we're trying to be consistent with. (10:05:14 AM) spot: honestly, i don't care which one (10:05:38 AM) Rathann|work: tibbs: um, perl and python (10:05:57 AM) scop: perl is "different" (10:06:03 AM) scop: python and ruby (10:06:12 AM) Rathann|work: right (10:06:26 AM) tibbs: Right, but what is the set of parenthesized symbols they provide? (10:06:33 AM) tibbs: I guess I can do rpm --provides myself... (10:06:41 AM) scop: (abi) (10:06:53 AM) RemiFedora: python have python(abi) and python-abi (10:07:01 AM) Rathann|work: right (10:07:17 AM) tibbs: ruby has no parentiesized provides. (10:07:29 AM) scop: python(abi) is automatic, python-abi is there only for historical reasons for distros that didn't autogenerate python deps (10:07:29 AM) tibbs: Ah, but ruby-libs does have ruby(abi). (10:07:35 AM) spot: ruby(abi) = 1.8 (10:07:55 AM) tibbs: So there's no precedent for php-zend(abi) versis php(zend-abi) (10:07:57 AM) spot: php-zend(abi) says "the abi of php-zend" (10:08:06 AM) spot: which is different from the abi of php (10:08:22 AM) tibbs: But there's no php-zend package. (10:08:36 AM) spot: tibbs: does that matteR? (10:08:46 AM) tibbs: That's what I'm asking. (10:08:48 AM) spot: ruby(abi) is in ruby-libs (10:09:17 AM) tibbs: In any case, none of this matters significantly. (10:09:31 AM) tibbs: So who will tack this request onto that bug report? (10:09:38 AM) spot: i dont think it matters that there is no php-zend package as long as something provides php-zend(abi) (10:10:24 AM) ***spot looks at Remi (10:10:28 AM) scop: if it means "the zend abi of php", ie. a property of php, then php(zend-abi) would sound better to me (10:10:49 AM) ***spot really doesn't care. (10:11:00 AM) spot: since this is new ground, break it one way or the other. :) (10:11:03 AM) RemiFedora: i can add a comment on the bug the provide php(zend-abi) ... (10:11:07 AM) ***scop really needs to go now (10:11:20 AM) spot: RemiFedora: please do (10:12:09 AM) spot: ok, i'm going to go get lunch (10:12:19 AM) spot: thanks for the time all, and happy new year (10:12:20 AM) RemiFedora: does i ask J.Orton to also provide the macros.php on FC6 ? (10:12:38 AM) spot: RemiFedora: if you want to use it in FC6. :) (10:12:58 AM) abadger1999: So long scop, spot (10:13:06 AM) tibbs: I will write up the other two items into the guidelines and present a draft to the list. (10:13:45 AM) abadger1999: RemiFedora: Could you explain what a "channel" is? (re your second email) (10:14:04 AM) RemiFedora: php.net is the main channel for extensions.. (10:14:10 AM) XulChris: abadger1999: its like a repo (10:14:57 AM) tibbs: Like every other language these days, PHP has its own packaging system. (10:15:08 AM) f13: gah (10:15:22 AM) tibbs: It's not nearly as intrusive as eggs or gems, though. (10:16:23 AM) abadger1999: k (10:16:51 AM) tibbs: XulChris: what does the channel get used for in the RPM world, though? (10:16:57 AM) tibbs: Just version checks and signatures? (10:17:13 AM) XulChris: tibbs: i use it to check for updates (10:17:32 AM) tibbs: I guess the question is, what breaks if it isn't there? (10:17:46 AM) XulChris: the install (10:18:21 AM) RemiFedora: and the build.. (in mock) (10:19:07 AM) tibbs: I guess you can't register the module if the channel isn't already set up. (10:19:19 AM) XulChris: that too (10:19:41 AM) abadger1999: f13: Group tag is a poor technology. But jeremy is against using comps to replace it. (10:19:45 AM) tibbs: But it doesn't actually contact the channel server for anything; it's just a formalism. (10:20:08 AM) tibbs: abadger1999: Just change your packages to say Group: none. (10:20:22 AM) abadger1999: f13: I liked Nicholas Mailhot's suggestion after he explained it. (10:20:35 AM) XulChris: tibbs: not sure what your asking, we need the channels as pear users expect them to be there (10:21:11 AM) tibbs: Someone asked what the channel is used for. (10:21:18 AM) tibbs: I'm trying to determine that. (10:21:48 AM) XulChris: tibbs: basically all pear commands are tied into channels (10:21:56 AM) XulChris: its pretty much ingrained into pear (10:22:10 AM) tibbs: I know, but you shouldn't be running pear in an RPM world; you can screw your system. (10:22:40 AM) XulChris: not sure i agree with that statement entirely (10:22:43 AM) RemiFedora: we must run pear in rpm scriplet... (10:22:45 AM) tibbs: Maybe not you, but Joe Random User certainly. (10:22:54 AM) f13: abadger1999: I don't recall what his suggestion was. (10:23:03 AM) f13: abadger1999: jeremy was against using comps to list every package in existance. (10:23:20 AM) XulChris: you shouldnt be using pear to install/uninstall packages that already have rpms, but pear does more than this, just type pear --help (10:23:23 AM) f13: abadger1999: however what could be accomplished by Other Tools is list the packages grouped by a comps file, then everything else in 'ungrouped' (10:23:29 AM) tibbs: XulChris: Yes, I know. (10:23:35 AM) tibbs: But it's still dangerous. (10:23:40 AM) abadger1999: f13: Exactly. And synaptic/smart/repoview would benefit greatly from listing every package in existence. (10:24:22 AM) abadger1999: Just look at the mess that is (10:24:23 AM) RemiFedora: "Joe Random User" need pear until all PHP extensions available in RPM repo. (10:24:37 AM) tibbs: RemiFedora: Yes, I know that too. (10:24:38 AM) abadger1999: If I actually want to find something in there, it's a nightmare. (10:25:08 AM) f13: abadger1999: it'll be a nightmare in whatever group gets the bulk of the "unintersting" packages too. The -devel packages, the libs that are just there for other top level packages, etc... (10:25:15 AM) XulChris: tibbs: also we are talking about joe average developer, not joe average user since pear is a developers tool (10:25:47 AM) tibbs: "pear upgrade-all" is not intended to be a developers tool as I understand things. (10:26:18 AM) abadger1999: f13: That's what makes Nicolas's proposal nice. (10:26:40 AM) XulChris: tibbs: it is, joe average user relies on joe average administrator to keep pear packages up to date (10:26:41 AM) abadger1999: He has separate ideas for categories and groups. (10:27:16 AM) tibbs: XulChris: First, that's obviously untrue. And second, administrator does not equal developer. (10:27:18 AM) abadger1999: So a package is marked as belonging to seven different categories. (10:27:27 AM) XulChris: tibbs: well you need root to run that command properly (10:27:45 AM) abadger1999: And the user is better able to tell what package they want. (10:28:14 AM) abadger1999: Meanwhile, generating a comps type file happens through the Group mechanism. (10:28:16 AM) tibbs: XulChris: Yes, and? (10:28:43 AM) tibbs: You seem to be forgetting that a large portion of desktop users have root, are not developers and are not qualified to be administrators. (10:28:57 AM) XulChris: tibbs: and there are many ways to screw your system as root, so we shouldnt bother trying to protect users (10:29:00 AM) tibbs: They should stay away from pear at all costs. (10:29:26 AM) tibbs: So if they don't need to run pear, then it's back to the original question: what is the channel for? (10:29:30 AM) RemiFedora: a large number of pear extensions are now available in Extras, we probably could patch pear to display a warning (you should try yum first) (10:29:49 AM) tibbs: You said it's because pear users expect to be there, but we've established that there should be few actual pear users. (10:29:54 AM) XulChris: tibbs: they _do_ use pear, just not for installing and uninstall pear packages that arent already rpms (10:30:17 AM) XulChris: er that *are* already rpms (10:30:19 AM) tibbs: Now you're going circular. (10:31:05 AM) abadger1999: Add a category of packages to a Group. Then the installer can see it's one package one group selection. (10:31:49 AM) abadger1999: A third party spin that concentrates on scientific applications can have a different category => Group mapping. (10:32:46 AM) XulChris: tibbs: do you modify Fedora's perl so that you cannot use its built in cpan updater? (10:33:33 AM) tibbs: Well, cpan is an add-on, not built-in, and you can package up any Perl module without registering any channels with it. (10:33:48 AM) XulChris: tibbs: thats becasue perl only has 1 channel, cpan (10:34:12 AM) tibbs: Cpan is not involved in any way whatsoever with installing Perl modules. (10:34:30 AM) XulChris: php has 2 channels, one for pear modules and one for pecl modules, with the possibility to have more channels for other repos like phpunit has now (10:34:45 AM) XulChris: tibbs: its a repository (10:34:49 AM) tibbs: You're just not getting it. (10:34:52 AM) XulChris: thats all a channel is, a repository (10:35:28 AM) tibbs: I'm asking, in the context of a world in which RPM is used for installations, what use the channels are. (10:35:49 AM) XulChris: tibbs: what use is a yum.repo.d/ file? (10:36:04 AM) tibbs: Sorry, now you're really off in left field. (10:36:14 AM) XulChris: its just a way to specify a repository (10:36:19 AM) XulChris: thats all a channel is (10:37:01 AM) RemiFedora: a channel create a "namespace" in the local pear registry, which is mandatory to build and install extensions (10:37:08 AM) tibbs: RemiFedora: Thank you. (10:37:09 AM) XulChris: if every single pear pecl and whatever file is packaged as an RPM you could agrue we do not need pear channels (10:37:32 AM) tibbs: But they still would fail to install because of lack of the appropriate namespace being set up? (10:37:45 AM) XulChris: if there was any overlap (10:37:53 AM) XulChris: like phpunit has overlap (10:38:06 AM) XulChris: but phpunit is the only case i know of that has the same name (10:48:52 AM) abadger1999: f13: Here's Nicolas's initial post: (10:49:05 AM) f13: abadger1999: I really don't have the bandwidth to discuss this right now :/ (10:49:13 AM) abadger1999: The idea was clarified throughout the thread, including into the next month. (10:49:19 AM) abadger1999: f13: k. (10:49:21 AM) scop: tibbs, CPAN.pm is very much in the business of installing Perl modules, and it's included/built in in core Perl since $forever (10:49:46 AM) abadger1999: Doesn't matter ATM. (10:49:56 AM) RemiFedora: about PHP Packaging, can we remove "Requires php" which doesn't conform to the general Guidelines (php already required by php-pear) (10:50:09 AM) tibbs: scop: you missed the point, but whatever. (10:50:28 AM) tibbs: I was indeed incorrect in thinking that it was an addon. (10:51:09 AM) RemiFedora: and use php-common when we need a "versioned" requires (10:52:00 AM) scop left the room ("Leaving"). (10:52:19 AM) f13: is there a handy shortcut for packaging man files? Just %{_mandir}/*/* ? (10:52:22 AM) XulChris: RemiFedora: i think we already discussed that a long time ago, and we decided php was only required if it was like a new version that is not available in an existing supported distro (10:52:52 AM) XulChris: so like if fc8 has a package that requires php 6, then that needs to be added, otherwise you can leave it out (10:53:15 AM) f13: isn't that what the php(abi) stuff would sort out? (10:53:19 AM) f13: should it be done automagically (10:53:22 AM) f13: like python (10:53:28 AM) XulChris: ya (10:53:34 AM) XulChris: good point (10:53:41 AM) XulChris: was that approved? i was busy earlier :) (10:53:43 AM) RemiFedora: yes, i agree, but "requires php" still a must according the PHP guidelines.. (10:55:17 AM) RemiFedora: so my comment is to remove it from the Guidelines (10:55:37 AM) tibbs: I would tend to agree. (10:56:41 AM) tibbs: For some reason I thought we intended to remove that when we added the php-abi stuff. (10:57:08 AM) RemiFedora: php(zend-api) is only for pecl extension. (10:57:21 AM) tibbs: Yes, I know that. (10:57:50 AM) tibbs: Crap, is it "api" or "abi"? I keep getting confused. (10:58:31 AM) RemiFedora: oups, i've post php(api) and php(zend-abi) ... (10:59:10 AM) RemiFedora: should i change to php(abi) and php(zend-abi) ? (10:59:38 AM) tibbs: php-api is what's provided by the current php package. (11:00:04 AM) RemiFedora: yes, and this must be keep (present for a very long time) (11:01:16 AM) tibbs: Hmm, on fc6, the php-common package provides php-api. Interesting. (11:01:48 AM) tibbs: I guess it got split, and php-common doesn't require php, either. (11:02:02 AM) RemiFedora: yes, php is the apache module, php-cli is the CLI, both requiring php-common (11:02:12 AM) tibbs: Does that influence whether we can remove Requries: php from the guidelines? (11:03:16 AM) RemiFedora: no, i don't think (11:03:18 AM) tibbs: It probably means that we need to remove it, because otherwise installing a php-module pulls in apache when perhaps you didn't want that. (11:03:59 AM) RemiFedora: a most pear/pecl extension works with php-cli (no need for a web server) (11:04:13 AM) RemiFedora: s/a/and/ (11:04:40 AM) tibbs: So, yes, it looks like we really do need to remove that bit. (11:05:30 AM) RemiFedora: and if we need a versioned requires, we should use Requires: php-common >= x.x (11:07:03 AM) RemiFedora: Ok for (provides by php-common) : php-api = %{apiver}, php(abi) = %{apiver}, php(zend-abi) = %{zendver} ?? (11:07:28 AM) tibbs: Well, let's think about it. (11:07:38 AM) tibbs: php-api is about source-level compatibility, correct? (11:08:34 AM) tibbs: As in, when they revise the interpreter so that old code no longer runs, that value will increase. (11:08:48 AM) RemiFedora: i don't really know, as it doesn't change... since 2004 (11:09:01 AM) tibbs: Wasn't the last time they broke source-level compatibility, though? (11:11:00 AM) RemiFedora: a lot of source-level compatibility (C or PHP) has been broken since 2004... (11:11:27 AM) tibbs: Crap, then that symbol is essentially useless. (11:11:46 AM) tibbs: The way Perl handles this is so much cleaner. (11:12:49 AM) tibbs: So PEAR modules should have a versioned requires on php (or php-common) (11:13:09 AM) tibbs: and PECL modules should require a specific version of php(zend-abi)? (11:13:51 AM) tibbs: The thing is, you don't want to have to rebuild every PEAR module for every minor php update. (11:15:07 AM) RemiFedora: PEAR have a requires on php-pear, no need for requires php, only if php-common >= x.x (11:15:28 AM) tibbs: FC5 has no php-common, though. (11:16:02 AM) RemiFedora: yes. And this will not cause rebuild. (11:16:50 AM) tibbs: Now I don't understand. What do you require on FC5, then? (11:16:52 AM) RemiFedora: rebuild are only requires for pecl when zend-api change (5.1 -> 5.2 p.e.) (11:17:18 AM) RemiFedora: only php-pear (11:17:45 AM) tibbs: And what happens to PEAR modules if PHP6 comes out and breaks compatibility? Modules are just silently broken? (11:17:47 AM) RemiFedora: and php > x.x is special version is needed. (11:19:10 AM) RemiFedora: difficult to know. requirement are provided by package.xml, but only for minimal version... (11:20:23 AM) tibbs: I guess in the absense of php(:MODULE_COMPAT_version) we'll just have to use php-common >= whatever and deal with problems when they happen. (11:20:34 AM) RemiFedora: yes (11:21:11 AM) tibbs: OK, so we have four possibilities: (11:21:43 AM) tibbs: FC5 PEAR modules need to require php-pear and php >= version if they require a particular version. (11:22:04 AM) tibbs: FC6+ PEAR modules need to require php-pear and php-common >= version if needed. (11:22:47 AM) tibbs: FC7+ PECL modules need to require php(zend-abi) (or is it zend-api?) = version. (11:22:57 AM) tibbs: FC5,6 PECL modules need what? (11:23:44 AM) RemiFedora: FC5 only php-api, but will not detect compatibility (11:24:04 AM) RemiFedora: FC6+ php(zend-abi) = version (11:24:25 AM) tibbs: But we may not be able to get FC6 PHP fixed to probide php(zend-abi). (11:24:57 AM) tibbs: I does currently provide php-zend-abi, though. (11:25:05 AM) RemiFedora: FC6 and FC7 already provide php-zend-api :) (11:25:15 AM) tibbs: I'd hate to need separate things for FC5, FC6 and FC7, though. (11:25:46 AM) RemiFedora: i agree, so i will also ask J.Orton for FC5 (11:25:54 AM) tibbs: And it's definitely "php-zend-abi"; nothing currently provides php-zend-api. (11:26:22 AM) tibbs: Which makes sense; this is about dlopen()ing existing binaries, hence ABI. (11:26:47 AM) RemiFedora: yes, abi, your right (11:27:20 AM) tibbs: So it's php(api) and php(zend-abi). API for the first and ABI for the second. (11:28:14 AM) RemiFedora: Like this ;) (11:30:06 AM) tibbs: Yes, looks like you got it right there. (11:30:28 AM) tibbs: So we'll see what he says and go from there. (11:35:26 AM) RemiFedora: i'm the reporter for this RFE, so i could post on the packaging ML when it goes on.. | https://fedoraproject.org/wiki/Meeting:Packaging_IRC_log_20070102?rd=Packaging:IRCLog20070102 | CC-MAIN-2017-13 | refinedweb | 6,088 | 67.79 |
I am not sure whether "norm" and "Euclidean distance" mean the same thing. Please could you help me with this distinction.
I have an
n
m
a
m
a[1,:]
np.linalg.norm
import numpy as np
a = np.array([[0, 0, 0 ,0 ], [1, 1 , 1, 1],[2,2, 2, 3], [3,5, 1, 5]])
N = a.shape[0] # number of row
pos = a[1,:] # pick out the second data point.
dist = np.zeros((N,1), dtype=np.float64)
for i in range(N):
dist[i]= np.linalg.norm(a[i,:] - pos)
A norm is a function that takes a vector as an input and returns a scalar value that can be interpreted as the "size" or "length" of that vector. Norms have some other important mathematical properties:
The Euclidean norm (also known as the L² norm) is just one of many different norms - there is also the max norm, the Manhattan norm etc. The L² norm of a single vector is equivalent to the Euclidean distance from that point to the origin, and the L² norm of the difference between two vectors is equivalent to the Euclidean distance between the two points.
As @nobar's answer says,
np.linalg.norm(x - y, ord=2) (or just
np.linalg.norm(x - y)) will give you Euclidean distance between the vectors
x and
y.
Since you want to compute the Euclidean distance between
a[1, :] and every other row in
a, you could do this a lot faster by eliminating the
for loop and broadcasting over the rows of
a:
dist = np.linalg.norm(a[1:2] - a, axis=1)
It's also easy to compute the Euclidean distance yourself using broadcasting:
dist = np.sqrt(((a[1:2] - a) ** 2).sum(1))
The fastest method is probably
scipy.spatial.distance.cdist:
from scipy.spatial.distance import cdist dist = cdist(a[1:2], a)[0]
Some timings for a (1000, 1000) array:
a = np.random.randn(1000, 1000) %timeit np.linalg.norm(a[1:2] - a, axis=1) # 100 loops, best of 3: 5.43 ms per loop %timeit np.sqrt(((a[1:2] - a) ** 2).sum(1)) # 100 loops, best of 3: 5.5 ms per loop %timeit cdist(a[1:2], a)[0] # 1000 loops, best of 3: 1.38 ms per loop # check that all 3 methods return the same result d1 = np.linalg.norm(a[1:2] - a, axis=1) d2 = np.sqrt(((a[1:2] - a) ** 2).sum(1)) d3 = cdist(a[1:2], a)[0] assert np.allclose(d1, d2) and np.allclose(d1, d3) | https://codedump.io/share/xn4Ug85gfxsI/1/is-quotnormquot-equivalent-to-quoteuclidean-distancequot | CC-MAIN-2017-09 | refinedweb | 430 | 77.03 |
Generally speaking, Scala uses “camel case” naming. That is, each word is capitalized, except possibly the first word:
UpperCamelCase lowerCamelCase
Underscores in names (
_) are not actually forbidden by the
compiler, but are strongly discouraged as they have
special meaning within the Scala syntax. (But see below
for exceptions.)
Classes/Traits
Classes should be named in upper camel case:
class MyFairLady
This mimics the Java naming convention for classes.
Objects
Object names are like class names (upper camel case).
An exception is when mimicking a package or function. This isn’t common. Example:
object ast { sealed trait Expr case class Plus(e1: Expr, e2: Expr) extends Expr ... } object inc { def apply(x: Int): Int = x + 1 }
Packages
Scala packages should follow the Java package naming conventions:
// wrong! package coolness // right! puts only coolness._ in scope package com.novell.coolness // right! puts both novell._ and coolness._ in scope package com.novell package coolness // right, for package object com.novell.coolness package com.novell /** * Provides classes related to coolness */ package object coolness { }
root
It is occasionally necessary to fully-qualify imports using
_root_. For example if another
net is in scope, then
to access
net.liftweb we must write e.g.:
import _root_.net.liftweb._
Do not overuse
_root_. In general, nested package resolves are a
good thing and very helpful in reducing import clutter. Using
_root_
not only negates their benefit, but also introduces extra clutter in and
of itself.
Methods
Textual (alphabetic) names for methods should be in lower camel case:
def myFairMethod = ...
This section is not a comprehensive guide to idiomatic method naming in Scala. Further information may be found in the method invocation section.
Accessors/Mutators
Scala does not follow the Java convention of prepending
set/
get to
mutator and accessor methods (respectively). Instead, the following
conventions are used:
- For accessors of properties, the name of the method should be the name of the property.
- In some instances, it is acceptable to prepend “`is`” on a boolean accessor (e.g.
isEmpty). This should only be the case when no corresponding mutator is provided. Please note that the Lift convention of appending “
_?” to boolean accessors is non-standard and not used outside of the Lift framework.
For mutators, the name of the method should be the name of the property with “
_=” appended. As long as a corresponding accessor with that particular property name is defined on the enclosing type, this convention will enable a call-site mutation syntax which mirrors assignment. Note that this is not just a convention but a requirement of the language.
class Foo { def bar = ... def bar_=(bar: Bar) { ... } def isBaz = ... } val foo = new Foo foo.bar // accessor foo.bar = bar2 // mutator foo.isBaz // boolean property
Unfortunately, these conventions fall afoul of the Java convention to name the private fields encapsulated by accessors and mutators according to the property they represent. For example:
public class Company { private String name; public String getName() { return name; } public void setName(String name) { this.name = name; } }
In Scala, there is no distinction between fields and methods. In fact, fields are completely named and controlled by the compiler. If we wanted to adopt the Java convention of bean getters/setters in Scala, this is a rather simple encoding:
class Company { private var _name: String = _ def name = _name def name_=(name: String) { _name = name } }
While Hungarian notation is terribly ugly, it does have the advantage of
disambiguating the
_name variable without cluttering the identifier.
The underscore is in the prefix position rather than the suffix to avoid
any danger of mistakenly typing
name _ instead of
name_. With heavy
use of Scala’s type inference, such a mistake could potentially lead to
a very confusing error.
Note that the Java getter/setter paradigm was often used to work around a lack of first class support for Properties and bindings. In Scala, there are libraries that support properties and bindings. The convention is to use an immutable reference to a property class that contains its own getter and setter. For example:
class Company { val string: Property[String] = Property("Initial Value")
Parentheses
Unlike Ruby, Scala attaches significance to whether or not a method is declared with parentheses (only applicable to methods of arity-0). For example:
def foo1() = ... def foo2 = ...
These are different methods at compile-time. While
foo1 can be called
with or without the parentheses,
foo2 may not be called with
parentheses.
Thus, it is actually quite important that proper guidelines be observed regarding when it is appropriate to declare a method without parentheses and when it is not.
Methods which act as accessors of any sort (either encapsulating a field
or a logical property) should be declared without parentheses except
if they have side effects. While Ruby and Lift use a
! to indicate
this, the usage of parens is preferred (please note that fluid APIs and
internal domain-specific languages have a tendency to break the
guidelines given below for the sake of syntax. Such exceptions should
not be considered a violation so much as a time when these rules do not
apply. In a DSL, syntax should be paramount over convention).
Further, the callsite should follow the declaration; if declared with parentheses, call with parentheses. While there is temptation to save a few characters, if you follow this guideline, your code will be much more readable and maintainable.
// doesn't change state, call as birthdate def birthdate = firstName // updates our internal state, call as age() def age() = { _age = updateAge(birthdate) _age }
Symbolic Method Names
Avoid! Despite the degree to which Scala facilitates this area of API
design, the definition of methods with symbolic names should not be
undertaken lightly, particularly when the symbols itself are
non-standard (for example,
>>#>>). As a general rule, symbolic method
names have two valid use-cases:
- Domain-specific languages (e.g.
actor1 ! Msg)
- Logically mathematical operations (e.g.
a + bor
c :: d)
In the former case, symbolic method names may be used with impunity so
long as the syntax is actually beneficial. However, in the course of
standard API design, symbolic method names should be strictly reserved
for purely-functional operations. Thus, it is acceptable to define a
>>= method for joining two monads, but it is not acceptable to define
a
<< method for writing to an output stream. The former is
mathematically well-defined and side-effect free, while the latter is
neither of these.
As a general rule, symbolic method names should be well-understood and self documenting in nature. The rule of thumb is as follows: if you need to explain what the method does, then it should have a real, descriptive name rather than a symbols. There are some very rare cases where it is acceptable to invent new symbolic method names. Odds are, your API is not one of those cases!
The definition of methods with symbolic names should be considered an advanced feature in Scala, to be used only by those most well-versed in its pitfalls. Without care, excessive use of symbolic method names can easily transform even the simplest code into symbolic soup.
Constants, Values, Variable and Methods
Constant names should be in upper camel case. Similar to Java’s
static final
members, if the member is final, immutable and it belongs to a package
object or an object, it may be considered a constant:
object Container { val MyConstant = ... }
The value:
Pi in
scala.math package is another example of such a constant.
Method, Value and variable names should be in lower camel case:
val myValue = ... def myMethod = ... var myVariable
Type Parameters (generics)
For simple type parameters, a single upper-case letter (from the English
alphabet) should be used, starting with
A (this is different than the
Java convention of starting with
T). For example:
class List[A] { def map[B](f: A => B): List[B] = ... }
If the type parameter has a more specific meaning, a descriptive name should be used, following the class naming conventions (as opposed to an all-uppercase style):
// Right class Map[Key, Value] { def get(key: Key): Value def put(key: Key, value: Value): Unit } // Wrong; don't use all-caps class Map[KEY, VALUE] { def get(key: KEY): VALUE def put(key: KEY, value: VALUE): Unit }
If the scope of the type parameter is small enough, a mnemonic can be used in place of a longer, descriptive name:
class Map[K, V] { def get(key: K): V def put(key: K, value: V): Unit }
Higher-Kinds and Parameterized Type parameters
Higher-kinds are theoretically no different from regular type parameters
(except that their
kind is at least
*=>* rather than simply
*). The naming conventions are generally
similar, however it is preferred to use a descriptive name rather than a
single letter, for clarity:
class HigherOrderMap[Key[_], Value[_]] { ... }
The single letter form is (sometimes) acceptable for fundamental concepts
used throughout a codebase, such as
F[_] for Functor and
M[_] for
Monad.
In such cases, the fundamental concept should be something well known and understood to the team, or have tertiary evidence, such as the following:
def doSomething[M[_]: Monad](m: M[Int]) = ...
Here, the type bound
: Monad offers the necessary evidence to inform
the reader that
M[_] is the type of the Monad.
Annotations
Annotations, such as
@volatile should be in lower camel case:
class cloneable extends StaticAnnotation
This convention is used throughout the Scala library, even though it is not consistent with Java annotation naming.
Note: This convention applied even when using type aliases on annotations. For example, when using JDBC:
type id = javax.persistence.Id @annotation.target.field @id var id: Int = 0
Special Note on Brevity
Because of Scala’s roots in the functional languages, it is quite normal for local names to be very short:
def add(a: Int, b: Int) = a + b
This would be bad practice in languages like Java, but it is good practice in Scala. This convention works because properly-written Scala methods are quite short, only spanning a single expression and rarely going beyond a few lines. Few local names are used (including parameters), and so there is no need to contrive long, descriptive names. This convention substantially improves the brevity of most Scala sources. This in turn improves readability, as most expressions fit in one line and the arguments to methods have descriptive type names.
This convention only applies to parameters of very simple methods (and local fields for very simply classes); everything in the public interface should be descriptive. Also note that the names of arguments are now part of the public API of a class, since users can use named parameters in method calls. | https://docs.scala-lang.org/style/naming-conventions.html | CC-MAIN-2018-47 | refinedweb | 1,776 | 54.02 |
Date: Fri, 30 Jun 1995 19:08:28 -0500
From: Randy Terbush <randy@zyzzyva.com>
1. There seems to be a problem in content_neg WRT .var files. I will sort
this out tonight or tomorrow.
Hmmm... already sent this privately, but for anyone else who's
interested, I think this *might* improve matters:
*** mod_negotiation.c~ Sun Jun 25 17:18:03 1995
--- mod_negotiation.c Sat Jul 1 13:46:05 1995
***************
*** 809,815 ****
int is_identity_encoding (char *enc)
{
! return (!enc || !strcmp (enc, "7bit") || !strcmp (enc, "8bit")
|| !strcmp (enc, "binary"));
}
--- 809,815 ----
int is_identity_encoding (char *enc)
{
! return (!enc || !enc[0] || !strcmp (enc, "7bit") || !strcmp (enc, "8bit")
|| !strcmp (enc, "binary"));
}
2. If I reference a URL via an ISMAP, the CWD is that of 'imagemap' and
not the URL.
Could you be a bit more specific about what gets zorched to cgi-bin
instead of the directory containing the map --- is it child processes,
relative SSI <!--#include--> directives, or something else? Thanks.
rst | http://mail-archives.apache.org/mod_mbox/httpd-dev/199507.mbox/%3C9507011846.AA06708@volterra%3E | CC-MAIN-2017-17 | refinedweb | 160 | 70.09 |
SYNOPSYS
#include "mlo.h" void viewlofig(ptfig) lofig_list *ptfig;
PARAMETER
- ptfig
- Pointer to the lofig to be scaned
DESCRIPTIONviewlofig scans all the primary elements of the lofig_list loaded in ram, and displays a textual output of the data strcuture contents. The LOINS, LOCON, LOSIG and LOTRS are scaned, and their contents displayed.
Its use is mostly for debugging purposes, and educational ones, since the output is quite verbose, if very easy to understand.
EXAMPLE
#include <stdio.h> #include "mlo.h" void view_fig_to_file(ptfig) lofig_list *ptfig; { FILE *file = freopen(ptfig->NAME, WRITE_TEXT, stdout); if (!file) { (void)fputs("Can't reopen stdout!\n", stderr); EXIT(); } viewlofig(ptfig->NAME); /* to file called name */ (void)fclose(file); } | http://manpages.org/viewlofig/3 | CC-MAIN-2019-18 | refinedweb | 113 | 51.95 |
Java Heap Memory Error
Errors and exceptions are very common when working with any programming language. In Java, all objects are stored in the Heap memory, and JVM throws an OutOfMemoryError when it is unable to allocate space to an object. Sometimes this error is also called Java Heap Space Error. Let's learn more about this error.
Reasons Behind the java.lang.OutOfMemoryError
Java has a predefined maximum amount of memory that an application should take and if the application exceeds this limit then the OutOfMemory error is thrown. Let's take a look at the reasons behind this error.
- This error can occur due to poor programming practices like using inefficient algorithms, wrong data structure choices, infinite loops, not clearing unwanted resources, holding objects for too long, etc. If this is the reason behind the error then one should reconsider the choices made when writing the program.
- There can be some other reasons that are not in control of the user. For example, a third-party library that is caching strings, or a server that does not clean up after deploying. Sometimes this error can occur even if everything is fine with the heap and the objects.
- Memory Leaks can also lead to OutOfMemory errors. A memory leak is a situation when an unused object is present in the heap and cannot be removed by the Garbage Collector because it still has valid references to it. Open resources can also lead to memory leaks and they must be closed by using the finally block. The garbage collector will automatically remove all the unreferenced objects and this situation can easily be avoided by making unused objects available to the garbage collector. Additional tools are also available to detect and avoid memory leaks.
- Excessive use of finalizers can also lead to the OutOfMemoryError. Classes having a finalize method do not have their object spaces reclaimed at the time of Garbage Collection. After the garbage collection process, these objects are queued for finalization at a later time and if the finalizer thread cannot keep up with the queue then this error is thrown.
We can always increase the memory needed by JVM but if the issue is more advanced then we will eventually run out of memory. One should try to optimize the code and look for memory leaks that may cause this error.
Example: OutOfMemoryError Due to an Infinite Loop
Let's create an ArrayList of Object type and add elements to it infinitely until we run out of memory.
import java.util.ArrayList; public class OutOfMemory { public static void main(String args[]) { ArrayList<Object> list = new ArrayList<Object>(); while(true) { Object o = new Object(); list.add(o); } } }
In this example, we will eventually run out of space even if we increase the memory assigned to the application. The following image shows the error message returned by the above program.
Example: Out of Memory Because of Limited Memory
If the memory assigned to the application is less than the required memory then the OutOfMemory error is returned. We can also try to optimize our code so that its space complexity can be reduced. If we are sure that this error can be fixed if we had more memory then we can increase it by using the -Xmx command. Let's run a program that does not have the required memory.
public class OutOfMemoryError { public static void main(String[] args) { Integer[] outOfMemoryArray = new Integer[Integer.MAX_VALUE]; } }
The error message is shown in the image below.
Resolving OutOfMemoryError by Increasing Heap Space
Increasing the heap space can sometimes resolve the OutOfMemory error. If we are sure that there are no memory leaks and our code cannot be optimized further, then increasing the heap space could solve our problem. We will use the -Xmx command while running the java application to increase the heap space.
For example, the following code will not work if set a 4MB limit on the memory.
public class OutOfMemory { public static void main(String[] args) { String[] str = new String[1000000]; System.out.print("Memory Allocated"); } }
However, if we increase the limit to 8MB or anything greater then it works perfectly fine.
Frequently Asked Questions
Q. What is the default heap size in Java?
The default heap size in Java is 1GB and it is sufficient for most tasks.
Q. What is the difference between -Xmx and -Xms?
Xms defines the initial or the minimum heap size and xmx is used to set the maximum heap size. JVM will start the memory defined by -xms and will not exceed the limit set by -xmx.
Q. What is the Stack Memory?
The stack memory is a Last-In-First-Out memory used for static memory allocation and for thread execution. Whenever a new object is created, it is stored in the Heap memory and a reference to it is stored in the stack memory.
Summary
The OutOfMemoryError is a subclass of the VirtualMachineError and it is thrown when JVM does not have sufficient space to allocate new objects. This error can occur due to poor programming practices like using infinite loops or not clearing unwanted objects. This can also occur due to the presence of some third-party libraries. Memory leaks also lead to this error and we can use tools like Eclipse Memory Analyzer to fix them. Sometimes, just increasing the heap space allotted to the application solves the problem. We should also try to look for optimized algorithms so that the overall space complexity of our program is less than the maximum space available. | https://www.studytonight.com/java-examples/java-heap-memory-error | CC-MAIN-2022-05 | refinedweb | 926 | 54.22 |
This is a full frame Silverlight Application. Notice the navigation links (home and about).
Notice the forward and back in the browser works…
And there is a deep link, that navigates you back to exactly this point in the application. You can cut and paste it into a blog entry, an email or an IM to your co-works and they will be taken to exactly the same point in the app.
… no mater what browser they are using.
Now, even the best developers sometimes make errors in applications. Links that are invalid or exceptions that get thrown. The Navigation Application Template makes it super easy to deal with those. Type in a bad URL and look at the experience (be sure to run in retail).
Now, let’s go in and look a bit of customization.
First, let’s add a new page.
Right click on Views in the client project and Add New Item, Select Silverlight Page.
When the page opens, add some simple text..
<TextBlock Text="Hello World!"></TextBlock>. While all the styling is there for you to customize, we made a few of the common properties easy to find and change even for a developer.
Fit F5 and see what we have….
As you can see, my color choices aren’t great, so it is good that we are shipping a whole library of app.xaml files for you to choice from. If you just drag one of the light, clean ones.. hit F5..…
Now, how are we going to access this data from the SL client? Well, traditionally many business applications have started out as 2-tier apps. That has a number of problems around scaling and flexibility… and more to the point it just doesn’t work with the Silverlight\web client architecture.
So developers are thrust into the n-tier world. .NET RIA Services make it very easy to create scalable, flexible n-tier services that builds on WCF and ADO.NET Data Services.
These .NET RIA Services model your UI-tier application logic and encapsulate your access to varies data sources from traditional relational data to POCO (plain old CLR objects) to cloud services such as Azure, S3, etc via REST, etc. One of the great thing about this is that you can move from an on premises Sql Server to an Azure hosted data services without having to change any of your UI logic.
Let’s look at how easy it is to create these .NET RIA Services.
Right click on the server project and select the new Domain Service class
In the wizard, select your data source. Notice here we could have choosen to use a Linq2Sql class, a POCO class, etc, etc.
In the NorthwindDomainService.cs class we have stubs for all the CRUD method for accessing your data. You of course should go in and customize these for your domain. For the next few steps we are going to use GetSuperEmployees(), so I have customized it a bit.
public IQueryable<SuperEmployee> GetSuperEmployees()
{
return this.Context.SuperEmployeeSet
.Where(emp=>emp.Issues>100)
.OrderBy(emp=>emp.EmployeeID);
}
Now, let’s switch the client side. Be sure to build the solution so you can access it from the client directly. These project are linked because we selected the “ASP.NET enable” in the new project wizard.
In HomePage.Xaml add
<datagrid:DataGrid x:
<DomainService. Notice the naming convention here…
In line 2, we are databinding the grid to the SuperEmployees.. then in line 3 we are loading the SuperEmployees, that class the GetSuperEmployees() method we defined on the server. Notice this is all async of course, but we didn’t have to deal with the complexities of the async world.
The result! We get all our entries back, but in the web world,don’t we want to do paging and server side sorting and filtering? Let’s look at how to do that.
First, remove the code we just added to codebehind.
Then, to replace that, let’s add a DomainDataSource.
1: <dds:DomainDataSource x:Name="dds"
2: LoadMethodName="LoadSuperEmployees"
3: AutoLoad="True"
4::
4: <StackPanel Width="900">
5:
6: <datagrid:DataGrid x:Name="dataGrid1" Height="300"
7:
8:
9: <dataControls:DataPager PageSize="10"
10:.
The cool thing is that the ActivityControl, the DataGrid and the DataPager can all be used with any datasource such as data from WCF service, REST service, etc.
Hit F5, and you see..
Notice we are loading 20 records at a time, but showing only 10. So advancing one page is client only, but advancing again we get back to the server and load the next 20. Notice this all works well with sorting as well. And the cool thing is where is the code to handle all of this? Did i write in on the server or the client? neither. Just with the magic of linq, things compose nicely and it i just falls out.
I can early add grouping..
<dds:DomainDataSource.GroupDescriptors>
<data:GroupDescriptor
</dds:DomainDataSource.GroupDescriptors>
Let’s add filtering… First add a label and a textbox..
<StackPanel Orientation="Horizontal" Margin="0,0,0,10">
<TextBlock Text="Origin: "></TextBlock>
<TextBox x:</TextBox>
</StackPanel>
and then these filter box to our DomainDataSource….
<dds:DomainDataSource.FilterDescriptors>
<data:FilterDescriptorCollection>
<data:FilterDescriptor
<data:ControlParameter
</data:ControlParameter>
</data:FilterDescriptor>
</data:FilterDescriptorCollection>
</dds:DomainDataSource.FilterDescriptors>
When we hit F5, we get a filter box, and as we type in it we do a server side filtering of the results.
Now, suppose we wanted to make that a autocomplete box rather an a simple text box. The first thing we’d have to do is get all the options. Notice we have to get those from the server (they client might not have them all). To do this we add a method to our DomainService.
public class Origin
{
public Origin() { }
[Key]
public string Name { get; set; }
public int Count { get; set; }
}
public IQueryable<Origin> GetOrigins()
{
var q = (from emp in Context.SuperEmployeeSet
select emp.Origin).Distinct()
.Select(name => new Origin {
Name = name,
Count = Context.SuperEmployeeSet.Count
(emp => emp.Origin.Trim() == name.Trim())
});
q = q.Where(emp => emp.Name != null);
return q;
}
<input:AutoCompleteBox x:
<…
Now – that was certainly some rich ways to view data, but business apps need to update data as well. Let’s look at how to do that. First replace all the xaml below the DDS with this… it gives us a nice master-details view.
1: <StackPanel Orientation="Horizontal" Margin="10,5,10,0" >"
14: HorizontalAlignment="Left" HorizontalScrollBarVisibility="Disabled"
15: ItemsSource="{Binding Data, ElementName=dds}"
16:
17: <datagrid:DataGrid.Columns>
18: <datagrid:DataGridTextColumn
19: <datagrid:DataGridTextColumn
20: <datagrid:DataGridTextColumn
21: </datagrid:DataGrid.Columns>
22: </datagrid:DataGrid>
23:
24: <dataControls:DataPager PageSize="13":Name="dataForm1" Height="375" Width="500"
39: AutoEdit="True" AutoCommit="True"
40: VerticalAlignment="Top"
41: CommandButtonsVisibility="None"
42: Header="Product Details"
43:
44: </dataControls:DataForm>
45:
46:
47: </StackPanel>
48: </StackPanel>
49: </StackPanel>
50:
Through line 35, this is pretty much the same as what we had.
Then in line 38, we add a DataForm control that gives us some nice way to view and edit a particular entity.
This looks like the traditional master-details scenario.
Notice as we change items they are marked as “Dirty” meaning they need to be submitted back to the server. You can make edits to many items, unmake some edits and the dirty marker goes away.
Now we need to wire up the submit button.
private void Button_Click(object sender, RoutedEventArgs e)
{
dataForm1.CommitItemEdit();
dds.SubmitChanges();
}
Now, that is cool, but what about Data Validation? Well, “for free” you get type level validation (if the filed is typed as an int you and you enter a string you get an error).
Now let’s see if we can add a bit more. We do that by editing the NorthwindDomainService.metadata.cs on the server. It is important to do it on the server so that the system does all the validations are done once for a great UX experience and then again on the server for data integrity. By the time your Update method is called on your DomainService you can be sure that all the validations have been done.
Here are some of the validations we can apply..
[Bindable(true, BindingDirection.OneWay)]
public int EmployeeID;
[RegularExpression("^(?:m|M|male|Male|f|F|female|Female)$", ErrorMessage = "Gender must be 'Male' or 'Female'")]
public string Gender;
[Range(0, 10000,
ErrorMessage = "Issues must be between 0 and 1000")]
public Nullable<int> Issues;
[Required]
[StringLength(100)]
public string Name;
Just rebuilding and running the app gives us great validation in the UI and in the middle tier.
Notice how I can navigate between the errors and input focus moves to the next one.
That was updating data, what about adding new data?
To do that, let’s explorer the new Silverlight 3 ChildWindow.
Right click on Views and add new Item Child Window:
</dataControls:DataForm>
public SuperEmployee NewEmployee {get; set;}
..initializing the instance in the constructor
NewEmployee = new SuperEmployee();
NewEmployee.LastEdit = DateTime.Now.Date;
this.newEmployeeForm.CurrentItem = NewEmployee;
..handling the OK button
private void OKButton_Click(object sender, RoutedEventArgs e)
{
newEmployeeForm.CommitItemEdit();
this.DialogResult = true;
}
Great… now to see this thing we need to write up a button to show it.
<Button Content="Add New" Width="100" Height="30"
Margin="0,20,0,0"
<SelectParameters>
<asp:QueryStringParameter
</SelectParameters>
</asp:DomainDataSource>
<asp:ListView
<LayoutTemplate>
<asp:PlaceHolder<" Height="40"
Margin="10,20,0,0" HorizontalAlignment="Left"
Click="ExportToExcel_Click" >
<StackPanel Orientation="Horizontal" >
<Image Source="excelIcon.png" Height="35" Width="50"></Image>
<TextBlock Text="Export to Excel" Padding="0,10,0,0">< emp in context.SuperEmployees)
{
sw.WriteLine("<Row>");
sw.WriteLine("<Cell><Data ss:Type=\"String\">{0}</Data></Cell>", emp.Name);
sw.WriteLine("<Cell><Data ss:Type=\"String\">{0}</Data></Cell>", emp.Origin);
sw.WriteLine("<Cell><Data ss:Type=\"String\">{0}</Data></Cell>", emp.Publishers);
sw.WriteLine("<Cell><Data ss:Type=\"Number\">{0}</Data></Cell>", emp.Issues);
sw.WriteLine("</Row>");
}
while (..
You can download the full demo files and check out the running application.
I’d love to have your feedback and thoughts! | http://blogs.msdn.com/b/brada/archive/2009/03/17/mix09-building-amazing-business-applications-with-silverlight-3.aspx?CommentPosted=true | CC-MAIN-2014-15 | refinedweb | 1,679 | 57.87 |
CDataGrid Control
General
This article is about a CDataGrid control programmed using Windows SDK. It is designed to be easy to use. The current version is not totally bug free, so it would be nice if you would report all detected bugs to have an update available soon.
The grid control is very similar to a MFC CListView control when it is in the "REPORT" view state. It supports a similar item, adding and removing and with custom item sorting with an application-defined comparison function.
The Code
How to use the CDataGrid control is explained here in detail. First of all, the header file "DataGrid.h" must be included in the project. Next, a variable of type CDataGrid must be declared.
#include "DataGrid.h" CDataGrid dataGrid;
Next, call the Create method to create and show a DataGrid window, passing as parameters handle to parent window, window rectangle, and a number of columns DataGrid will have. Then, use the InsertItem method to add items to DataGrid, passing item text and alignment. A method SetItemInfo in two versions can be used to set subitem text, alignment, selection, read-only attribute, or to change background color. In the first version, the index of the item and subitem are passed, along with subitem text, alignment, or read-only attribute directly.
dataGrid.Create( wndRect, hParentWnd, 5 ); dataGrid.InsertItem( "Item1", DGTA_LEFT ); dataGrid.InsertItem( "Item2", DGTA_CENTER ); dataGrid.InsertItem( "Item3", DGTA_RIGHT ); dataGrid.SetItemInfo( 0, 1, "Subitem1", DGTA_CENTER, false ); DG_ITEMINFO dgii; dgii.dgMask = DG_TEXTRONLY; dgii.dgItem = 1; dgii.dgSubitem = 0; dgii.dgReadOnly = true; dataGrid.SetItemInfo(&dgii); dataGrid.Update();
To remove a single item, use RemoveItem, passing as the argument the index of the row that will be deleted.
dataGrid.RemoveItem(2); dataGrid.Update();
Note: All indexing is zero-numbered. Also, calling the Update method is necessary after adding or removing items.
To remove all items, use RemoveAllItems, which has no agruments.
dataGrid.RemoveAllItems();
CDataGrid control has numerous features, as explained below.
Features
These are the current features of DataGrid:
- Automatic resizing with the parent window
- Enabled when the Resize method is called each time the DataGrid parent window changes its size.
- Enable/Disable sorting
- Uses the EnableSort method.
- Enable/Disable item text editing
- Uses the EnableEdit method.
- Enable/Disable column resizing
- Uses the EnableResize method.
- Enable/Disable grid
- Uses the EnableGrid method.
- Automatic scrolling to specified item
- Uses the EnsureVisible method.
- Automatic selection of specified item
- Uses the SelectItem method.
- Item sorting using custom application-defined comparison function
- Uses the SetCompareFunction method.
- Get/Set column text color
- Uses the GetColumnTextColor and SetColumnTextColor methods.
- Get/Set column font
- Uses the GetColumnFont and SetColumnFont methods.
- Get/Set row text color
- Uses the GetRowTextColor and SetRowTextColor methods.
- Get/Set row font
- Uses the GetRowFont and SetRowFont methods.
I hope to extend this list of features as soon as possible.
Notifications
CDataGridcontrol sends the following notification massages to its parent window via a WM_COMMAND message:
- DGM_ITEMCHANGED
- When focus is changed from one item to another.
- DGM_ITEMTEXTCHANGED
- When item/subitem text is changed.
- DGM_ITEMADDED
- When item is added.
- DGM_ITEMREMOVED
- When item is removed.
- DGM_COLUMNRESIZED
- When column is resized.
- DGM_COLUMNCLICKED
- When column is clicked.
- DGM_STARTSORTING
- When sorting is started.
- DGM_ENDSORTING
- When sorting is ended.
Also, this list of notifications will be extended.
Conclusion
The user can obtain all mentioned information and some more from the well-commented header file "DataGrid.h".
Grid not visiblePosted by Tagarn on 03/16/2006 04:28am
A small fix.Posted by dinus on 10/04/2005 09:35am
Thanks again for your code. I found a small problem and I want to propose a fix for it. When I try to create DataGrid with width which doesn't match the summary width of all columns and with content less than a page, DataGrid doesn't show. Here is how to easily recreate this situation: 1. Create a parent window, with coordinates 0, 0, 500, 500, rather than using CW_USEDEFAULT (DataGrid test.cpp, line 122). 2. Create a datagird of size 0, 0, 400, 400 (RECT rect={0, 0, 400, 400};, DataGrid test.cpp, line 348). 3. Add just 3 columns of size 100. 4. Add 10 rows. After starting DataGrid test.exe you will see that Data Grid doesn't show. I was able to fix this by adding the following line: SendMessage(dataGrid.GetWindowHandle(), WM_SIZE, 400, 400); right before datagrid.Update() (DataGrid test.cpp, line 385). Regards,Reply
Very nice code.Posted by dinus on 08/20/2005 10:11am
Very nice code for those who like low level programming. Thnks.Reply | http://www.codeguru.com/cpp/controls/controls/gridcontrol/article.php/c10319/CDataGrid-Control.htm | crawl-003 | refinedweb | 755 | 61.22 |
NUnit 3.0 is the second "reset" of the NUnit project. In 2002, we moved to an entirely new codebase and began to use .NET attributes as a way of designating tests. As of December 2008, 22 releases have been produced in that codeline and the functionality has grown enormously. NUnit 2.5 is currently under way and is expected to be the last release in the 2.x series.
Beginning in 2007, it was apparent that new needed features would be difficult to add without some rethinking of several basic design issues. The result is a new project, NUnit 3.0, which will allow us to create a platform that others can more easily extend. Launchpad is the chosen site for this development because it makes it easy for people to participate.
As of August, 2009, this group holds separate projects for NUnit 2.5 and NUnit 3.0 development as well as the NUnitLite project and several supporting tools. Other projects will be added as we proceed.
Project group information
- Maintainer:
- Charlie Poole
- Driver:
- Not yet selected
- Bug tracker:
- None specified
All bugs Latest bugs reported
- Bug #1637078: Specific tests are not running in n unit console
Reported on 2016-10-27
- Bug #1480019: VS2015 Test Explorer does not run all tests when in different namespaces
Reported on 2015-07-31
- Bug #1453793: Names of tests declared in base classes should be default reflected names
Reported on 2015-05-11
- Bug #1404024: Docked Test Explorer window location not remembered
Reported on 2014-12-18
- Bug #1367312: Add iteration number to test name for repeated tests
Reported on 2014-09-09
More contributors Top contributors
- Abhilash 4 points
- Charlie Poole 1 points
Milestones
Projects
All questions Latest questions
- Category not part of project file
Posted on 2013-11-07
- NUnit result file result classification
Posted on 2013-11-07
- Looking to implement NUnit 2.6.x feature for MBUnit to NUnit Conversion
Posted on 2013-10-30
- Working with inherited fixtures and generic type restrictions
Posted on 2013-09-20
- Property attribute is missing for TestCase/TestCaseSource/ValueSouerce attrib...
Posted on 2013-08-26
All blueprints Latest blueprints
- Constraint Class
Registered on 2010-09-26
- Parameterized Tests
Registered on 2010-04-29
- Siliverlight 3.0 Build
Registered on 2009-10-20
- Generated API Documentation
Registered on 2009-10-20
- Extended Constraint Syntax for .NET 3.5
Registered on 2009-09-28 | https://launchpad.net/nunit-xtp | CC-MAIN-2017-22 | refinedweb | 402 | 51.18 |
I started).
Lines of Code shows how many lines of source code there is in your application, namespace, class or method. LoC can be used to:
My suggestion is to use LoC to monitor the size of your code units. When it comes to software estimation you may probably find better estimation methods.
There is very good book about software estimation: Software Estimation - Demystifying the Black Art by Steve McConnell. Before using LoC as silver bullet I suggest you to come back to the ground and read this book.
Jeff Atwood wrote very good posting about LoC - Diseconomies of Scale and Lines of Code. He cites Steve McConnell:
[Using software industry productivity averages], the 10,000 LOC system would require 13.5 staff months. If effort increased linearly, a 100,000 LOC system would require 135 staff months. But it actually requires 170 staff months.
[Using software industry productivity averages], the 10,000 LOC system would require 13.5 staff months. If effort increased linearly, a 100,000 LOC system would require 135 staff months. But it actually requires 170 staff months.
To get better idea about difference in linear and real estimation take look at the following chart.
I think error at size of 35 staff months is pretty horrible experience for budget, isn’t it?
One of the classic mistakes is using LoC to measure programmers productivity. It is nonsense. One complex algorithm may take about 100 lines of code but the time it takes to make it work may be equal to system that has 10000 lines of code or even more. There is heavy difference in complexity. By example, writing ASP.NET MVC application is pretty easy and straightforward compared to algorithm I mentioned.
Also how can be programmer who wrote 1000 lines of code more effective than programmer who wrote 20 lines of code and achieved same or even better functionality? I see here one more danger – why should programmers write effective and easy to manage code if their work is respected when they write much less effective and way longer code? Measuring productivity this makes strong professionals to seem as horrible ballast in team – do you really want to disrespect or even lose your main workhorses?
There are two LoC metrics and they differ by measuring method:
It turns out that logical LoC – however you measure it- is way better than physical LoC because it contains less noise.
LoC is good metric to measure size of code units. It can be also used as estimation metric but under very narrow and restrictive limits. It is something you can use when you start estimating but you have to leave it as soon as you find some more exact estimation method. You cannot use LoC to measure progress of project or productivity of programmers – don’t even think about it. :)
In my next postings I will introduce you how to measure code metrics using Visual Studio Code Analysis tools and NDepend.: - Code Metrics: Lines of Code (LoC)
Pingback from ASP.NET MVC Archived Blog Posts, Page 1
nicely written Gunnar.
Pingback from The Morning Brew - Chris Alcock » The Morning Brew #543
I don't see how the NCSS and/or NCSL acronyms could be overlooked in an article about counting code lines.
Sure I can, Dave. :) This is not the only posting about LoC and I don't want my postings to be too long.
As you have mentioned, the better way is to use it to measure the length of your methods, functions, etc to check for complexity of that method or function. However as Bill Gates has mentioned, Measuring programming progress by lines of code is like measuring aircraft building progress by weight.
My previous posting gave quick overview of code metric called Lines of Code (LoC) . In this posting I
Thank you for this post. I'll be following your series on code metrics.
In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications
Idunno.
There are other metrics to measure code quality. As far as I know, there is an industry average of 10 to 12 KSLOCs per year for a programmer, which has proved to be highly consistent across a large range of languages and project types - I think Steve McConnell mentions this in the book you mentioned.
Precisely because of the stability of productivity in terms of SLOCs I think it is an excellent measure of programmer productivity. All things being equal (I mean code quality, quality of design, bug density and so on), it is obvious that the programmer cranking out more SLOCs a year is more productive. IMO this metric is is usable the other wa y round too: if a programmer or a team is cranking outn significantly more than 1000 SLOCs per month per man, there are probably quality or technology issues there, or at least counting issues.
To talk in terms of airplane building and weight: obviously, a plane weighing more, but being built similarly well with one weighing less will be smaller. In fact, in airplane building, economies of scale do exist, and of two planes built using the same technology, probably the heavier one is more fuel efficient and can carry more weight for less money. Only, technologies have limits, and you can't go beyond a certain size using a specific technology. This comparison translated to software building would mean that a larger code base is probably doing more than a smaller one, and as such should earn more money for the company proportionally to size, if both rely on the same platform/libraries and were written using the same language.
IMO, the problem of using SLOCs as a productivity mnetric is comparing apples to apples. Of course, if you compare a sloppy codebase having 100000 SLOCs, heavily using copy/paste reuse, with a well written, well designed codebase of only 30000 SLOCs, and both to the same thing, you definitely can't say the programmers who have written the 100000 SLOCs application were more productive, but you definitely can say they worked more, while being less productive.
Which is why I think SLOCs is an important metric, which, when used properly, can help you a lot in keeping track of software development efforts.
Thanks for good comment! One little thought from McConnell: which programmer is more productive: the one who gets high SLOC building five web forms per week or programmer who is working on complex algorithm and in the end of the week he has 50 lines of code written? In first case tools will help programmer to get more lines (by example Windows Forms designer generates hell load of code). In second case programmer writes a lot of code but most of it will be deleted during work.
I think SLOCs are good metric if you know how to apply it in the context of your team. But for sure it should not be the only metric when deciding about effectiveness of your team. The number of defects is also good metric although it is not directly measurable through source code analysis. | http://weblogs.asp.net/gunnarpeipman/archive/2010/02/19/code-metrics-lines-of-code-loc.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+gunnarpeipman+%28Gunnar+Peipman%27s+ASP.NET+blog%29 | crawl-003 | refinedweb | 1,192 | 60.75 |
Hello,
I am trying to click an object whose name is span_<today’s date>_. Obviously I don’t have it in the object repository. Can I click it nonetheless ?
Thanks in advance.
Regards
How to click an object with a dynamic name?
Hello,
There should be other attributes that you might be able to use and give a unique path to the element. Saying that, we would need to see some HTML to guide you further on that.
And, you could theoretically create a String variable that contains “today’s date” and use it to click on your element. There are other forum Q & A on that for you to find.
How about:
Although you can create an element that is housed in the Object Repository, I don’t do that below. Instead, the below makes the object in code and then clicks on it.
import com.kms.katalon.core.testobject.TestObject as TestObject import com.kms.katalon.core.testobject.TestObjectProperty as TestObjectProperty import com.kms.katalon.core.testobject.ConditionType as ConditionType Date todaysDate = new Date() def today = todaysDate.format("yyyyddMM") // put your necessary date format here TestObject spanTo = new TestObject() elementName = "span_${today}_" xpath = "//*[@name='${elementName}']" spanTo.addProperty("xpath", ConditionType.EQUALS, xpath) WebUI.click(spanTo)
Thank you, I’ll try that.
Regards | https://forum.katalon.com/t/how-to-click-an-object-with-a-dynamic-name/58417 | CC-MAIN-2021-43 | refinedweb | 216 | 52.05 |
Almost all applications can make use of queues. Queues are a great way to make time intensive tasks seem immediate by sending the task into the background or into a message queue. It's great to send anything and everything into the queue that doesn't require an immediate return value (such as sending an email or firing an API call). The queue system is loaded into masonite via the
QueueProvider Service Provider.
Masonite uses pickle to serialize and deserialize Python objects when appropriate. Ensure that the objects you are serializing is free of any end user supplied code that could potentially serialize into a Python object during the deserialization portion.
It would be wise to read about pickle exploitations and ensure your specific application is protected against any avenues of attack.
All configuration settings by default are in the
config/queue.py file. Out of the box, Masonite supports 3 drivers:
async
amqp
database
The
async driver simply sends jobs into the background using multithreading. The
amqp driver is used for any AMQP compatible message queues like RabbitMQ. If you do create a driver, consider making it available on PyPi so others can also install it. The
database driver has a few additional features that the other drivers do not have if you need more fine-grained control
Jobs are simply Python classes that inherit the
Queueable class that is provided by Masonite. We can simply create jobs using the
craft job command.
$ craft job SendWelcomeEmail
This will create a new job inside
app/jobs/SendWelcomeEmail.py. Our job will look like:
from masonite.queues import Queueableclass SendWelcomeEmail(Queueable):def __init__(self):passdef handle(self):pass
We can run jobs by using the
Queue class. Let's run this job from a controller method:
from app.jobs.SendWelcomeEmail import SendWelcomeEmailfrom masonite import Queuedef show(self, queue: Queue):queue.push(SendWelcomeEmail)
That's it. This job will now send to the queue and run the
handle method.
Notice in the show method above that we passed in just the class object. We did not instantiate the class. In this case, Masonite will resolve the job constructor. All job constructors are able to be resolved by the container so we can simply pass anything we need as normal:
from masonite.queues import Queueablefrom masonite.request import Requestfrom masonite import Mailclass SendWelcomeEmail(Queueable):def __init__(self, request: Request, mail: Mail):self.request = requestself.mail = maildef handle(self):pass
Remember that anything that is resolved by the container is able to retrieve anything from the container by simply passing in parameters of objects that are located in the container. Read more about the container in the Service Container documentation.
We can also instantiate the job as well if we need to pass in data from a controller method. This will not resolve the job's constructor at all:
from app.jobs.SendWelcomeEmail import SendWelcomeEmailfrom masonite import Queuedef show(self, queue: Queue):var1 = 'value1'var2 = 'value2'queue.push(SendWelcomeEmail(var1, var2))
The constructor of our job class now will look like:
class SendWelcomeEmail(Queueable):def __init__(self, var1, var2):self.var1 = var1self.var2 = var2
Whenever jobs are executed, it simply executes the handle method. Because of this we can send our welcome email:
from masonite.queues import Queueablefrom masonite.request import Requestfrom masonite import Mailclass SendWelcomeEmail(Queueable):def __init__(self, request: Request, mail: Mail):self.request = requestself.mail = maildef handle(self):self.mail.driver('mailgun').to(self.request.user().email).template('mail/welcome').send()
That's it! This job will be loaded into the queue. By default, Masonite uses the
async driver which just sends tasks into the background.
We can also send multiple jobs to the queue by passing more of them into the
.push() method:
from app.jobs.SendWelcomeEmail import SendWelcomeEmailfrom app.jobs.TutorialEmail import TutorialEmailfrom masonite import Queuedef show(self, queue: Queue):queue.push(SendWelcomeEmail, TutorialEmail('val1', 'val2'))
Most of the time you will want to resolve the constructor but pass in variables into the
handle() method. This can be done by passing in an iterator into the
args= keyword argument:
from masonite import Queuedef show(self, queue: Queue):queue.push(SendWelcomeEmail, args=['[email protected]'])
This will pass to your handle method:
from masonite.request import Requestfrom masonite import Mailclass SendWelcomeEmail(Queueable):def __init__(self, request: Request, mail: Mail):self.request = requestself.mail = maildef handle(self, email):email # =='[email protected]'
You can also call any arbitrary function or method using the queue driver. All you need to do is pass the reference for it in the push method and pass any arguments you need in the args parameter like so:
def run_async(obj1, obj2):passdef show(self, queue: Queue):obj1 = SomeObject()obj2 = AnotherObject()queue.push(run_async, args=(obj1, obj2))
This will then queue this function to be called later.
Note that you will not be able to get a response value back. Once it gets sent to the queue it will run at an arbitrary time later.
The
async queue driver will allow you to send jobs into the background to run asynchronously. This does not need any third party services like the
amqp driver below.
The async driver has 2 different modes:
threading and
multiprocess. The differences between the two is that
threading uses several threads and
multiprocess uses several processes. Which mode you should use depends on the type of jobs you are processing. You should research what is best depending on your use cases.
You can change the mode inside the
config/queue.py file:
DRIVERS = {'async': {'mode': 'threading' # or 'multiprocess'},}
During development it may be hard to debug asyncronous tasks. If an exception is thrown it will be hard to catch that. It may appear that a job is never ran.
In order to combat this you can set the
blocking setting in your
config/queue.py file:
DRIVERS = {'async': {'mode': 'threading' # or 'multiprocess','blocking': True},}
Blocking bascially makes asyncronous tasks run syncronously. This will enable some reporting inside your terminal that looks something like:
GET Route: /categoriesJob Ran: <Future at 0x1032cef60 state=finished returned str>Job Ran: <Future at 0x1032f1a90 state=finished returned str>...
This will also run tasks syncronously so you can find exceptions and issues in your jobs during development.
For production this should be set to
False.
It may be good to set this setting equal to whatever your
APP_DEBUG environment variable is:
from masonite import envDRIVERS = {'async': {'mode': 'threading' # or 'multiprocess','blocking': env('APP_DEBUG')},}
This way it will always be blocking during development and automatically switch to unblocking during production.
The
amqp driver can be used to communicate with RabbitMQ services.
In order to get started with this driver you will need to install RabbitMQ on your development machine (or production machine depending on where you are running Masonite)
You can find the installation guide for RabbitMQ here.
Once you have RabbitMQ installed you can go ahead and run it. This looking something like this in the terminal if ran successfully:
$ rabbitmq-server## #### ## RabbitMQ 3.7.8. Copyright (C) 2007-2018 Pivotal Software, Inc.########## Licensed under the MPL. See ############ Logs: /usr/local/var/log/rabbitmq/[email protected]/usr/local/var/log/rabbitmq/[email protected]_upgrade.logStarting broker...completed with 6 plugins.
Great! Now that RabbitMQ is up and running we can look at the Masonite part.
Now we will need to make sure our driver and driver configurations are specified correctly. Below are the default values which should connect to your current RabbitMQ configuration. This will be in your
app/queue.py file
DRIVER = 'amqp'...DRIVERS = {'amqp': {'username': 'guest','password': 'guest','host': 'localhost','port': '5672','channel': 'default',}}
If your rabbit MQ instance requires a
vhost but doesn't have a port, we can add a
vhost and set the port to none.
vhost and
port both have the option of being
None. If you are developing locally then
vhost should likely be left out all together. The setting below will most likely be used for your production settings:
DRIVER = 'amqp'...DRIVERS = {'amqp': {'username': 'guest','vhost': '/','password': 'guest','host': 'localhost','port': None,'channel': 'default',}}
The database driver will store all jobs in a database table called
queue_jobs and on fail, will store all failed jobs in a
failed_jobs table if one exists. If the
failed_jobs table does not exist then it will not store any failed jobs and any jobs that fail will be lost.
In order to get these two queue table you can run the
queue:table command with the flag on which table you would like:
This command will create the
queue_jobs migration where you can store your jobs:
$ craft queue:table --jobs
This command will create the
failed_jobs migration where you can store your failed jobs:
$ craft queue:table --failed
Once these migrations are created you can run the migrate command:
$ craft migrate
Jobs can be easily delayed using the
database driver. Other drivers currently do not have this ability. In order to delay a job you can use a string time using the
wait keyword.
def show(self, queue: Queue):queue.push(SendWelcomeEmail, wait="10 minutes")
We can now start the worker using the
queue:work command. It might be a good idea to run this command in a new terminal window since it will stay running until we close it.
$ craft queue:work
This will startup the worker and start listening for jobs to come in via your Masonite project.
You can also specify the driver you want to create the worker for by using the
-d or
--driver option
$ craft queue:work --driver amqp
You may also specify the
channel as well.
channel may mean different things to different drivers. For the
amqp driver, the
channel is which queue to listen to. For the
database driver, the
channel is the connection to find the
queue_jobs and
queue_failed tables.
$ craft queue:work --driver database --channel sqlite
That's it! send jobs like you normally would and it will process via RabbitMQ:
from app.jobs import SomeJob, AnotherJobfrom masonite import Queue...def show(self, queue: Queue):# do your normal logicqueue.push(SomeJob, AnotherJob(1,2))
you can also specify the channel to push to by running:
queue.push(SomeJob, AnotherJob(1,2), channel="high")
Sometimes your jobs will fail. This could be for many reasons such as an exception but Masonite will try to run the job 3 times in a row, waiting 1 second between jobs before finally calling the job failed.
If the object being passed into the queue is not a job (or a class that implements
Queueable) then the job will not requeue. It will only ever attempt to run once.
Each job can have a
failed method which will be called when the job fails. You can do things like fix a parameter and requeue something, call other queues, send an email to your development team etc.
This will look something like:
from masonite.queues import Queueablefrom masonite.request import Requestfrom masonite import Mailclass SendWelcomeEmail(Queueable):def __init__(self, request: Request, mail: Mail):self.request = requestself.mail = maildef failed(self, payload, error):self.mail.to('[email protected]').send('The welcome email failed')
It's important to note that only classes that extend from the
Queueable class will handle being failed. All other queued objects will simply die with no failed callback.
Notice that the failed method MUST take 2 parameters.
The first parameter is the payload which tried running which is a dictionary of information that looks like this:
payload == {'obj': <class app.jobs.SomeJob>,'args': ('some_variables',),'callback': 'handle','created': '2019-02-08T18:49:59.588474-05:00','ran': 3}
and the error may be something like
division by zero.
By default, when a job is failed it disappears and cannot be ran again since Masonite does not store this information.
If you wish to store failed jobs in order to run them again at a later date then you will need to create a queue table. Masonite makes this very easy.
First you will need to run:
$ craft queue:table
Which will create a new migration inside
databases/migrations. Then you can will migrate it:
$ craft migrate
Now whenever a failed job occurs it will store the information inside this new table.
You can run all the failed jobs by running
$ craft queue:work --failed
This will get all the jobs from the database and send them back into the queue. If they fail again then they will be added back into this database table.
You can modify the settings above by specifying it directly on the job. For example you may want to specify that the job reruns 5 times instead of 3 times when it fails or that it should not rerun at all.
Specifying this on a job may look something like:
from masonite.request import Requestfrom masonite import Mailclass SendWelcomeEmail(Queueable):run_again_on_fail = Falsedef __init__(self, request: Request, mail: Mail):self.request = Requestself.mail = Maildef handle(self, email):...
This will not try to rerun when the job fails.
You can specify how many times the job will rerun when it fails by specifying the
run_times attribute:
from masonite.request import Requestfrom masonite import Mailclass SendWelcomeEmail(Queueable):run_times = 5def __init__(self, request: Request, mail: Mail):self.request = Requestself.mail = Maildef handle(self, email):... | https://docs.masoniteproject.com/v/v2.2/useful-features/queues-and-jobs | CC-MAIN-2020-16 | refinedweb | 2,203 | 56.66 |
Google Wave: Loosely coupling IM to everything
Skype soared when it wrapped IM around Internet voice calls. The familiar around the difficult.
IM brings profiles, contact lists, presence, people search, and messages. Google Wave lets you build those into any software.
Skype gained power by creating an IM namespace (the list of people who use Skype) that tightly coupled IM with more things to do (voice, video, file transfer, gameplay). Wave creates the same value but with loose coupling. Third parties have added voice conferencing, video conferencing, games. Third parties will be able to tie in their own namespaces (people@mycompany.com) and their own apps.
I fully expect to see Wave built into web sites, mobile apps, desktop clients, smart devices. Anything where you expect people (or things) to talk with people.
Loose coupling has its weaknesses. Incompatibility with extensions (we have to agree on which video plug-in to use in a conversation), no single point of contact to resolve problems, difficulty upgrading the entire network, and social issues like privacy and spam.
Wave supporters argue that the Wave protocol's weaknesses are strengths. That loose coupling provides for greater innovation in a marketplace of ideas (like software and music). That no single point of contact means no single point of failure, and no centralized control (like email).
It's still early, but now is when companies like Skype and Yahoo! and Microsoft and Cisco need to formulate something other than a wait-and-see strategy. Wave is as intensely viral and engaging as email. So you want to either jump on Wave's bandwagon or have your counter force strategy deployed before Wave hits tipping points and crosses the chasm from geek pioneers to the mainstream.
tags: skype, wave, google, googlewave
Call me at +1-510-316-9773, Skype me, follow @skypejournal and @Phil Wolff.
Visit our Skype Journal private roundtable, one of the longest running public Skype chats.
Labels: competition, google, skype, strategy, technology
2 Comments:
Phil, again you are spot on. Regarding crossing the chasm from Geek > Main Street, I believe that has already started to occur as companies such as Salesforce, SAP and others have begun to apply it to real-world issues -
look who's online ;-)
We've started to moderate comments to avoid spam. Please excuse the short delay. We'll get your post online a quickly as possible.
Links to this post:
Create a Link | http://skypejournal.com/2009/10/google-wave-loosely-coupling-im-to.html | crawl-003 | refinedweb | 404 | 65.32 |
.
While often our data can be well represented by a homogeneous array of values, sometimes this is not the case. This section demonstrates the use of NumPy's structured arrays and record arrays, which provide efficient storage for compound, heterogeneous data. While the patterns shown here are useful for simple operations, scenarios like this often lend themselves to the use of Pandas
Dataframes, which we'll explore in Chapter 3.
import numpy as np
Imagine that we have several categories of data on a number of people (say, name, age, and weight), and we'd like to store these values for use in a Python program. It would be possible to store these in three separate arrays:
name = ['Alice', 'Bob', 'Cathy', 'Doug'] age = [25, 45, 37, 19] weight = [55.0, 85.5, 68.0, 61.5]
But this is a bit clumsy. There's nothing here that tells us that the three arrays are related; it would be more natural if we could use a single structure to store all of this data. NumPy can handle this through structured arrays, which are arrays with compound data types.
Recall that previously we created a simple array using an expression like this:
x = np.zeros(4, dtype=int)
We can similarly create a structured array using a compound data type specification:
# Use a compound data type for structured arrays data = np.zeros(4, dtype={'names':('name', 'age', 'weight'), 'formats':('U10', 'i4', 'f8')}) print(data.dtype)
[('name', '<U10'), ('age', '<i4'), ('weight', '<f8')]
Here
'U10' translates to "Unicode string of maximum length 10,"
'i4' translates to "4-byte (i.e., 32 bit) integer," and
'f8' translates to "8-byte (i.e., 64 bit) float."
We'll discuss other options for these type codes in the following section.
Now that we've created an empty container array, we can fill the array with our lists of values:
data['name'] = name data['age'] = age data['weight'] = weight print(data)
[('Alice', 25, 55.0) ('Bob', 45, 85.5) ('Cathy', 37, 68.0) ('Doug', 19, 61.5)]
As we had hoped, the data is now arranged together in one convenient block of memory.
The handy thing with structured arrays is that you can now refer to values either by index or by name:
# Get all names data['name']
array(['Alice', 'Bob', 'Cathy', 'Doug'], dtype='<U10')
# Get first row of data data[0]
('Alice', 25, 55.0)
# Get the name from the last row data[-1]['name']
'Doug'
Using Boolean masking, this even allows you to do some more sophisticated operations such as filtering on age:
# Get names where age is under 30 data[data['age'] < 30]['name']
array(['Alice', 'Doug'], dtype='<U10')
Note that if you'd like to do any operations that are any more complicated than these, you should probably consider the Pandas package, covered in the next chapter.
As we'll see, Pandas provides a
Dataframe object, which is a structure built on NumPy arrays that offers a variety of useful data manipulation functionality similar to what we've shown here, as well as much, much more.
np.dtype({'names':('name', 'age', 'weight'), 'formats':('U10', 'i4', 'f8')})
dtype([('name', '<U10'), ('age', '<i4'), ('weight', '<f8')])
For clarity, numerical types can be specified using Python types or NumPy
dtypes instead:
np.dtype({'names':('name', 'age', 'weight'), 'formats':((np.str_, 10), int, np.float32)})
dtype([('name', '<U10'), ('age', '<i8'), ('weight', '<f4')])
A compound type can also be specified as a list of tuples:
np.dtype([('name', 'S10'), ('age', 'i4'), ('weight', 'f8')])
dtype([('name', 'S10'), ('age', '<i4'), ('weight', '<f8')])
If the names of the types do not matter to you, you can specify the types alone in a comma-separated string:
np.dtype('S10,i4,f8')
dtype([('f0', 'S10'), ('f1', '<i4'), ('f2', '<f8')])
The shortened string format codes may seem confusing, but they are built on simple principles.
The first (optional) character is
< or
>, which means "little endian" or "big endian," respectively, and specifies the ordering convention for significant bits.
The next character specifies the type of data: characters, bytes, ints, floating points, and so on (see the table below).
The last character or characters represents the size of the object in bytes.
tp = np.dtype([('id', 'i8'), ('mat', 'f8', (3, 3))]) X = np.zeros(1, dtype=tp) print(X[0]) print(X['mat'][0])
(0, [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]) [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]
Now each element in the
X array consists of an
id and a $3\times 3$ matrix.
Why would you use this rather than a simple multidimensional array, or perhaps a Python dictionary?
The reason is that this NumPy
dtype directly maps onto a C structure definition, so the buffer containing the array content can be accessed directly within an appropriately written C program.
If you find yourself writing a Python interface to a legacy C or Fortran library that manipulates structured data, you'll probably find structured arrays quite useful!
NumPy also provides the
np.recarray class, which is almost identical to the structured arrays just described, but with one additional feature: fields can be accessed as attributes rather than as dictionary keys.
Recall that we previously accessed the ages by writing:
data['age']
array([25, 45, 37, 19], dtype=int32)
If we view our data as a record array instead, we can access this with slightly fewer keystrokes:
data_rec = data.view(np.recarray) data_rec.age
array([25, 45, 37, 19], dtype=int32)
The downside is that for record arrays, there is some extra overhead involved in accessing the fields, even when using the same syntax. We can see this here:
%timeit data['age'] %timeit data_rec['age'] %timeit data_rec.age
1000000 loops, best of 3: 241 ns per loop 100000 loops, best of 3: 4.61 µs per loop 100000 loops, best of 3: 7.27 µs per loop
Whether the more convenient notation is worth the additional overhead will depend on your own application.
This section on structured and record arrays is purposely at the end of this chapter, because it leads so well into the next package we will cover: Pandas. Structured arrays like the ones discussed here are good to know about for certain situations, especially in case you're using NumPy arrays to map onto binary data formats in C, Fortran, or another language. For day-to-day use of structured data, the Pandas package is a much better choice, and we'll dive into a full discussion of it in the chapter that follows. | https://nbviewer.jupyter.org/github/donnemartin/data-science-ipython-notebooks/blob/master/numpy/02.09-Structured-Data-NumPy.ipynb | CC-MAIN-2019-13 | refinedweb | 1,094 | 59.53 |
Hi Herb,
There isn’t a built-in equivalent of GetFileBasename() in Geomatica for Python. Python has a large number of built-in functions, and has built-in functions that can be applied to return the base name without the extension.
In the example below, there are two functions you’ll need to use, basename and split. Basename will return the file name with the extension, and split will partition the string into a list using a specified delimiter.
import os data = r'c:\data\scene_01.pix' base = os.path.basename(data) print base scene_01.pix print base.split('.') ['scene_01', 'pix'] print base.split('.')[0] scene_01
You can also group these functions together into a single line.
os.path.basename(data).split('.')[0]
I hope this helps. If you have anymore questions don't hesitate to ask. Good luck with Python! | https://support.pcigeomatics.com/hc/en-us/community/posts/204352936-EASI-GetFileBasename-in-Python | CC-MAIN-2019-22 | refinedweb | 141 | 70.6 |
Get Help:Ask a Question in our Forums|Report a Bug|More Help Resources
|
Join
The ASP.NET AJAX Library includes a rich framework to simplify client programming. This lists the ASP.NET AJAX Library namespaces.
Contains members and types that extend base ECMAScript (JavaScript) objects and that provide members that are more familiar to .NET developers. Includes extensions for the JavaScript Array, Boolean, Error, Number, Object, and String types.
Represents the root namespace for the Microsoft AJAX Library, which contains all fundamental classes and base classes.
Contains classes that manage facilities for querying and working with data.
Contains types related to communication between ASP.NET AJAX client applications and Web services on the server.
Contains types related to data serialization for ASP.NET AJAX client applications
Contains types that provide client script access to ASP.NET authentication service, profile service, and other application services.
Contains types related to user interface (UI), such as controls, events, and UI properties in the Microsoft AJAX Library.
Contains types related to partial-page rendering in the Microsoft AJAX Library. | http://www.asp.net/ajaxlibrary/Reference.ashx | crawl-003 | refinedweb | 176 | 51.75 |
I receive a JSON with 30 fields, and my entity is built from this JSON.
The problem is: two fields shouldn't be updated (two dates).
If I use
entity.merge
This article explains in great details your question, but I'm going to summarize it here as well.
If you never want to update those two fields, you can mark them with
updatable=false:
@Column(name="CREATED_ON", updatable=false) private Date createdOn;
Once you load an entity and you modify it, as long as the current
Session or
EntityManager is open, Hibernate can track changes through the dirty checking mechanism. Then, during flush, an
SQL update will be executed.
If you don't like that all columns are included in the
UPDATE statement, you can use dynamic update:
@Entity @DynamicUpdate public class Product { //code omitted for brevity }
Then, only the modified columns will be included in the
UPDATE statement. | https://codedump.io/share/Zao3loOR6b91/1/how-to-update-only-a-part-of-all-entity-attributes-with-hibernate | CC-MAIN-2017-30 | refinedweb | 150 | 59.33 |
Editor's note: Swing Hacks is not just about visual trickery, as this excerpt illustrates. The book's purpose is to enable developers to deliver more compelling desktop applications with Java, and this hack is an example of that, working not with the visuals of a
JTable, but the model behind it. By leveraging the JDBC support provided by J2SE, you can map a database table into a Swing
TableModel, which then lets you expose it as a
JTable. Read on for the nitty-gritty.
Bring your database tables into Swing with a minimum of hassle.
If you've worked with databases, you've probably also worked with the tools they provide for quick table maintenance and queries: command-line tools that are well suited to brief hack-and-slash work, but hard to work with once you start dealing with any serious amount of data. It's hard enough to write the SQL command to return 10 or 20 columns in a query—it's even worse when the results word-wrap over the course of a dozen lines, and you can't tell where one result ends and another begins.
Wouldn't it be nice to be able to throw the contents of any database table into a Swing
JTable? Give it a few JDBC strings, toss it in a
JFrame, and pow!—instant GUI.
If you've worked with both JDBC and Swing, you'll grasp the concept in one sentence: use table metadata to build a Swing
TableModel from the database table. If you haven't, here's the background you'll need: JDBC provides an abstract means of accessing databases. Java code to work with one database should work with another, the only difference is in the way that JDBC achieves a
Connection to the database, which is usually a matter of providing
Strings for:
A driver class, which provides implementations of the various
java.sql interfaces.
A URL with which to connect to the database. This implies the use of sockets, though that's not necessarily the case. Some small embeddable databases can live in the same JVM as your application.
An optional username.
An optional password.
Once you have the
Connection, you can begin to send commands (creation, deletion, and altering of tables) or queries to the database by creating
Statements from the
Connection. You can also use the
Connection to get metadata about the database, like what kinds of features it supports, how long certain strings can be, etc. More importantly for this hack, it allows you to discover what tables are in the database, what columns they have, and what types of data are in those columns.
So, given just a
Connection and the name of a table in the database, you can build a Java representation of its contents with two queries. The first query gets column metadata for the table and builds up arrays of the column names and their types. These can be mapped reasonably well to Java classes, at least for whatever types you intend to support. The second query gets all the data from the table. For each row, it gets each column's value. This is put into a two-dimensional array, which represents the entire contents of the table.
With these two queries done, you have everything you need to support the abstract methods of
AbstractTableModel:
getRowCount() is the length of the
contents array that you create.
getColumnCount() is 0 if you have no contents, or the length of the first item in the contents array (which is itself an array because contents is a two-dimensional array).
getValueAt() is the value at
contents[row][col].
AbstractTableModel has utterly trivial implementations of
getColumnClass() and
getColumnName(), so the first always returns
Object.class, the second returns "A", "B", "C", etc.; holding onto column metadata from the first query allows you to provide more useful implementations of these methods, too.
Example 3-12 shows how the
JDBCTableModel is implemented.
import javax.swing.*; import javax.swing.table.*; import java.sql.*; import java.util.*; /** an immutable table model built from getting metadata about a table in a jdbc database */ public class JDBCTableModel extends AbstractTableModel { Object[][] contents; String[] columnNames; Class[] columnClasses; public JDBCTableModel (Connection conn, String tableName) throws SQLException { super(); getTableContents (conn, tableName); } protected void getTableContents (Connection conn, String tableName) throws SQLException { // get metadata: what columns exist and what // types (classes) are they? DatabaseMetaData meta = conn.getMetaData(); System.out.println ("got meta = " + meta); ResultSet results = meta.getColumns (null, null, tableName, null) ; System.out.println ("got column results"); ArrayList colNamesList = new ArrayList(); ArrayList colClassesList = new ArrayList(); while (results.next()) { colNamesList.add (results.getString ("COLUMN_NAME")); System.out.println ("name: " + results.getString ("COLUMN_NAME")); int dbType = results.getInt ("DATA_TYPE"); switch (dbType) { case Types.INTEGER: colClassesList.add (Integer.class); break; case Types.FLOAT: colClassesList.add (Float.class); break; case Types.DOUBLE: case Types.REAL: colClassesList.add (Double.class); break; case Types.DATE: case Types.TIME: case Types.TIMESTAMP: colClassesList.add (java.sql.Date.class); break; default: colClassesList.add (String.class); break; }; System.out.println ("type: " + results.getInt ("DATA_TYPE")); } columnNames = new String [colNamesList.size()]; colNamesList.toArray (columnNames); columnClasses = new Class [colClassesList.size()]; colClassesList.toArray (columnClasses); // get all data from table and put into // contents array Statement statement = conn.createStatement (); results = statement.executeQuery ("SELECT * FROM " + tableName); ArrayList rowList = new ArrayList(); while (results.next()) { ArrayList cellList = new ArrayList(); for (int i = 0; i<columnClasses.length; i++) { Object cellValue = null; if (columnClasses[i] == String.class) cellValue = results.getString (columnNames[i]); else if (columnClasses[i] == Integer.class) cellValue = new Integer ( results.getInt (columnNames[i])); else if (columnClasses[i] == Float.class) cellValue = new Float ( results.getInt (columnNames[i])); else if (columnClasses[i] == Double.class) cellValue = new Double ( results.getDouble (columnNames[i])); else if (columnClasses[i] == java.sql.Date.class) cellValue = results.getDate (columnNames[i]); else System.out.println ("Can't assign " + columnNames[i]); cellList.add (cellValue); }// for Object[] cells = cellList.toArray(); rowList.add (cells); } // while // finally create contents two-dim array contents = new Object[rowList.size()] []; for (int i=0; i<contents.length; i++) contents[i] = (Object []) rowList.get (i); System.out.println ("Created model with " + contents.length + " rows"); // close stuff results.close(); statement.close(); } // AbstractTableModel methods public int getRowCount() { return contents.length; } public int getColumnCount() { if (contents.length == 0) return 0; else return contents[0].length; } public Object getValueAt (int row, int column) { return contents [row][column]; } // overrides methods for which AbstractTableModel // has trivial implementations public Class getColumnClass (int col) { return columnClasses [col]; } public String getColumnName (int col) { return columnNames [col]; } }
The constructor dumps off its real work to
getTableContents(), which is responsible for the two queries just described. It gets a
DatabaseMetaData object from the
Connection, from which you can then get the column data with a
getColumns() call. The arguments to this method are the catalog, schema pattern, table name pattern, and column name pattern; this implementation ignores catalogs and schema, although you might need to have callers specify them if you have a complex database.
getColumns() returns a
ResultSet, which you iterate over just like you would with the results of a regular JDBC query.
Getting the column name is easy: just call
getString("COLUMN_NAME"). The type is a little more interesting, as the
getInt("DATA_TYPE") call will return an
int, which represents one of the constants of the
java.sql.Types class. In this example, I've simply mapped
Strings and the basic number types to appropriate Java classes.
TIMESTAMP is SQL's concept of a point in time (a
DATE and a
TIME), so it gets to be a Java Date. Knowing these types will make it easier to call the right
getXXX() method when retrieving the actual table data.
The second query is a simple SELECT
* FROM tableName. With no
WHERE restriction on the query, this will create a
ResultSet with every row in the table. I shouldn't have to mention that if
tableName is a table with millions of records, your resulting
TableModel is not going to fit into memory. You knew that, right?
Again, you need to iterate over a
ResultSet. Each time that
results.next() returns true, meaning there's another result, you pull out every column you know about from the earlier metadata query. This means calling a
getXXX() method and passing in the column name, where you know which
getXXX() to use from your earlier investigation of the type of each column. You can go ahead and put numeric data into its proper wrapper class (
Integer, Double, etc.) because that works well with the class-based rendering system of
JTables. A caller might decide to use a
TableCellRenderer that applies a
Format class to all
Doubles in the table to display them only to a certain number of decimal points, or to render Dates with relative terms like "Today" and "25 hours ago." Strongly typing the data in your model will help with that.
With the queries done, you just convert the
ArrayLists to real arrays (which offer quick lookups for the get methods). The implementations of the
AbstractTableModel methods mentioned previously, as well as the improved implementations of
getColumnClass() and
getColumnName(), are trivial uses of the
columnNames, columnClasses, and
contents arrays built up by this method.
Before you say "I can't run this hack, I don't have a database," relax! The open source world has you covered. And no, it's not some big thing like JBoss. HSQLDB, more commonly known by its old name, Hypersonic, is a JDBC relational database engine written in Java. It is really small and can be run as a standalone server or within your JVM. If you are database-less, grab HSQLDB from.
Whatever your database, you'll need a driver classname, URL, username, and password to make a connection to the database. If you have your own database, I trust you already know this. If you just downloaded HSQLDB one paragraph ago, then you'll be using the following information:
Driver: org.hsqldb.jdbcDriver
URL: jdbc:hsqldb:file:testdb
User: sa
Password: (none)
This assumes you'll be running Hypersonic as part of your application, meaning you'll need to extend your classpath to pick up the hsqldb.jar file. Also note that this will create some testdb files in your current directory that you can clean up when done. You can also provide a full path to some other directory; see HSQLDB's docs for more info.
The test runner expects to pick up the connection strings as properties named
jdbctable.driver, jdbctable.url, jdbctable.user, and
jdbctable.pass. To make things easier, there are two ways to pass these in: either as system properties (usually specified with
-D arguments to the java command), or in a file called jdbctable.properties. The book code has a sample of the latter with HSQLDB values as defaults.
To test the
JDBCTableModel, the
TestJDBCTable creates an entirely new table in the database. The model gets the
Connection and the name of this table and loads the data from the database. Then the test class simply creates a new
JTable from the model and puts it in a
JFrame. Example 3-13 shows the source for this demo.
import javax.swing.*; import javax.swing.table.*; import java.sql.*; import java.util.*; import java.io.*; public class TestJDBCTable { public static void main (String[] args) { try { /* driver, url, user, and pass can be passed in as system properties "jdbctable.driver", "jdbctable.url", "jdbctable.user", and "jdbctable.pass", or specified in a file called "jdbctable.properties" in current directory */ Properties testProps = new Properties(); String ddriver = System.getProperty ("jdbctable.driver"); String durl = System.getProperty ("jdbctable.url"); String duser = System.getProperty ("jdbctable.user"); String dpass = System.getProperty ("jdbctable.pass"); if (ddriver != null) testProps.setProperty ("jdbctable.driver", ddriver); if (durl != null) testProps.setProperty ("jdbctable.url", durl); if (duser != null) testProps.setProperty ("jdbctable.user", duser); if (dpass != null) testProps.setProperty ("jdbctable.pass", dpass); try { testProps.load (new FileInputStream ( new File ("jdbctable.properties"))); } catch (Exception e) {} // ignore FNF, etc. System.out.println ("Test Properties:"); testProps.list (System.out); // now get a connection // note care to replace nulls with empty strings Class.forName(testProps.getProperty ("jdbctable.driver")).newInstance(); String url = testProps.getProperty ("jdbctable.url"); url = ((url == null) ? "" : url); String user = testProps.getProperty ("jdbctable.user"); user = ((user == null) ? "" : user); String pass = testProps.getProperty ("jdbctable.pass"); pass = ((pass == null) ? "" : pass); Connection conn = DriverManager.getConnection (url, user, pass); // create db table to use String tableName = createSampleTable(conn); // get a model for this db table and add to a JTable TableModel mod = new JDBCTableModel (conn, tableName); JTable jtable = new JTable (mod); JScrollPane scroller = new JScrollPane (jtable, ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED, ScrollPaneConstants.HORIZONTAL_SCROLLBAR_AS_NEEDED); JFrame frame = new JFrame ("JDBCTableModel demo"); frame.getContentPane().add (scroller); frame.pack(); frame.setVisible (true); conn.close(); } catch (Exception e) { e.printStackTrace(); } } public static String createSampleTable (Connection conn) throws SQLException { Statement statement = conn.createStatement(); // drop table if it exists try { statement.execute ("DROP TABLE EMPLOYEES"); } catch (SQLException sqle) { sqle.printStackTrace(); // if table !exists } statement.execute ("CREATE TABLE EMPLOYEES " + "(Name CHAR(20), Title CHAR(30), Salary INT)"); statement.execute ("INSERT INTO EMPLOYEES VALUES " + "('Jill', 'CEO', 200000 )"); statement.execute ("INSERT INTO EMPLOYEES VALUES " + "('Bob', 'VP', 195000 )"); statement.execute ("INSERT INTO EMPLOYEES VALUES " + "('Omar', 'VP', 190000 )"); statement.execute ("INSERT INTO EMPLOYEES VALUES " + "('Amy', 'Software Engineer', 50000 )"); statement.execute ("INSERT INTO EMPLOYEES VALUES " + "('Greg', 'Software Engineer', 45000 )"); statement.close(); return "EMPLOYEES"; } }
The
createSampleTable() method is something you could rewrite to insert your own types and values easily. In fact, because it returns the name of the table you've created, you could create many different tables in your database and test out how the model handles them. Or, use a loop to create lots of rows and see how long it takes to load them.
At any rate, when run, the
TestJDBCTable produces a
JFrame with the database table's contents, as seen in Figure 3-9.
Figure 3-9. JTable populated from a database. | http://www.oreillynet.com/lpt/a/6292 | CC-MAIN-2014-15 | refinedweb | 2,308 | 58.69 |
Hey everyone. Before i start i'm writing up this code for my mothers employee database. I have attached two files. An example of how it's going to be run through a picture and the employee database which is a .txt file containing data such as first name, last name etc. I have started the code, and being new to this i came across a problem. I'm having trouble opening the employee.txt file to display like the example in the picture attached. I'm using a switch to call the function but had no luck. Atm i have the menu set up and i really just want to get the "alphabetized listing" working. I was told to also sort it alphabetically, but even if i can get the .txt to open in my program is more than enough help, i can work around it from there. Thanks.
Here's my code.
#include <iostream> #include <string> #include <iomanip> #include <stdlib.h> #include <fstream> using namespace std; // STRUCT struct HRindex { int dob_y, dob_m, dob_d; int start_y, start_m, start_d; int pay_rate; int exemptions; string first_name, last_name, ssn; int item_code; }; ofstream outputFile; bool outputFileIsEmpty = true; const char* fileName = "employee.txt"; // FUNCTION PROTOTYPE void menu(); void readAllOfFile ( ifstream & ); void displayRec ( HRindex & ); void searchRec ( ifstream & ); HRindex readFile ( ifstream &file ); void menu () { int choice=5; ifstream inputFile; while (choice !=0) { cout <<endl; cout <<endl; cout <<"This program reads a data file of employee data, displays a menu of "<<endl; cout <<"choices, and displays the report selected."<<endl; cout <<endl; cout <<endl; cout <<" ------------------------------------ "<<endl; cout <<" MENU "<<endl; cout <<" ==================================="<<endl; cout <<endl; cout <<" 1. Alphabetizd Listing "<<endl; cout <<endl; cout <<" 2. Show the record for an employee "<<endl; cout <<endl; cout <<" 3. Budget for employee pay increases "<<endl; cout <<endl; cout <<" 4. Retirement elegibility "<<endl; cout <<endl; cout <<" 0. Exit "<<endl; cout <<endl; cout <<endl; cout <<"Enter your choice := "; cin >>choice; switch (choice) { case 1: // call function to open and sort employee.txt? //readAllOfFile ( ifstream & ); break; case 2: break; case 3: cout <<endl; cout <<endl; cout <<"hello 2 "<<endl; break; case 4: cout <<endl; cout <<endl; cout <<"hello 3 "<<endl; break; default : // pops up for wrong user selection system("cls"); } } cin.get(); } void readAllOfFile( ifstream &file ) { cout <<endl; cout <<endl; // displays the report heading cout << "Employee "<<"SSN "<<"DOB " <<"Start Date "<<"Pay Rate "<<"Exemptions "<<endl; cout <<endl; cout << "====================================================================================="<<endl; cout <<endl; while ( !file.eof() ) { // calls the readfile functnio under the display function displayRec( readFile( file ) ); cout << endl; } return; } HRindex readFile ( ifstream &file ) { HRindex newHRindex; file >> newHRindex.first_name; file >> newHRindex.last_name; file >> newHRindex.ssn; file >> newHRindex.dob_y; file >> newHRindex.dob_m; file >> newHRindex.dob_d; file >> newHRindex.start_y; file >> newHRindex.start_m; file >> newHRindex.start_d; file >> newHRindex.pay_rate; file >> newHRindex.exemptions; return newHRindex; } void displayRec ( HRindex &newHRindex ) { cout <<endl; cout <<newHRindex.first_name<<" "<<newHRindex.last_name <<" "<< newHRindex.ssn <<" " <<endl;//<<newHRindex.dob_y <<" "<<newHRindex.dob_m <<" "<<newHRindex.dob_d<<" "<<newHRindex.start_y<<" "<<newHRindex.start_m<<" "<<newHRindex.start_d<" "<<newHRindex.pay_rate<" "<<newHRindex.exemptions <<endl; return; } //---------------------------------------------- // main menu int main () { // calls the menu function menu(); cin.get(); return 0; } | https://www.daniweb.com/programming/software-development/threads/115376/opening-txt-file-and-sorting | CC-MAIN-2016-50 | refinedweb | 497 | 76.11 |
Here is my understanding of fork():
The fork() system call spins-off a child process from the parent process. The core-image(address-space) of the child process is an exact copy of that of the parent. The address space contains:
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
int main()
{
pid_t pid;
pid=fork();
int a=5;
if(pid<0){/*error condition*/
printf("Error forking\n");
}
if(pid==0){/*child process*/
printf("Child process here\n");
a=a+5;
printf("The value of a is %d\n",a);
printf("The address of a is %p\n",&a);
printf("Child terminated\n");
exit(getpid()); /*child terminates*/
}
else{/*parent process*/
printf("Parent blocked\n");
wait(NULL); /*waiting for child process to exit*/
printf("Parent process here");
printf("The value of a is %d\n",a);
printf("The address of a is %p\n",&a);
printf("parent terminated");
}
}
Parent blocked
Child process here
The value of a is 10
The address of a is 0x7ffe4c37b1a0
Child terminated
Parent process hereThe value of a is 5
The address of a is 0x7ffe4c37b1a0
Not so.
The addresses seen by both the child and the parent are relative to their own address spaces, not relative to the system as a whole.
The operating system maps the memory used by each process to a different location in physical (or virtual) memory. But that mapping is not visible to the processes. | https://codedump.io/share/LdancRp0FV1N/1/copy-on-write-during-fork | CC-MAIN-2017-26 | refinedweb | 236 | 58.21 |
Controllers
Phoenix controllers act as intermediary modules. Their functions - called actions - are invoked from the router in response to HTTP requests. The actions, in turn, gather all the necessary data and perform all the necessary steps before invoking the view layer to render a template or returning a JSON response.
Phoenix controllers also build on the Plug package, and are themselves plugs. Controllers provide the functions to do almost anything we need to in an action. If we do find ourselves looking for something that Phoenix controllers don’t provide; however, we might find what we’re looking for in Plug itself. Please see the Plug Guide or Plug Documentation for more information.
A newly generated Phoenix app will have a single controller, the
PageController, which can be found at
lib/hello_web/controllers/page_controller.ex and looks like this.
defmodule HelloWeb.PageController do use HelloWeb, :controller def index(conn, _params) do render conn, "index.html" end end
The first line below the module definition invokes the
__using__/1 macro of the
HelloWeb module, which imports some useful modules.
The
PageController gives us the
index action to display the Phoenix welcome page associated with the default route Phoenix defines in the router.
Actions
Controller actions are just functions. We can name them anything we like as long as they follow Elixir’s naming rules. The only requirement we must fulfill is that the action name matches a route defined in the router.
For example, in
lib/hello_web/router.ex we could change the action name in the default route that Phoenix gives us in a new app from index:
get "/", PageController, :index
To test:
get "/", PageController, :test
As long as we change the action name in the
PageController to
test as well, the welcome page will load as before.
defmodule HelloWeb.PageController do . . . def test(conn, _params) do render conn, "index.html" end end
While we can name our actions whatever we like, there are conventions for action names which we should follow whenever possible. We went over these in the Routing Guide, but we’ll take another quick look here.
-
Each of these actions takes two parameters, which will be provided by Phoenix behind the scenes.
The first parameter is always
conn, a struct which holds information about the request such as the host, path elements, port, query string, and much more.
conn, comes to Phoenix via Elixir’s Plug middleware framework. More detailed info about
conn can be found in plug’s documentation.
The second parameter is
params. Not surprisingly, this is a map which holds any parameters passed along in the HTTP request. It is a good practice to pattern match against params in the function signature to provide data in a simple package we can pass on to rendering. We saw this in the Adding Pages guide when we added a messenger parameter to our
show route in
lib/hello_web/controllers/hello_controller.ex.
defmodule HelloWeb.HelloController do . . . def show(conn, %{"messenger" => messenger}) do render conn, "show.html", messenger: messenger end end
In some cases - often in
index actions, for instance - we don’t care about parameters because our behavior doesn’t depend on them. In those cases, we don’t use the incoming params, and simply prepend the variable name with an underscore,
_params. This will keep the compiler from complaining about the unused variable while still keeping the correct arity.
Gathering Data
While Phoenix does not ship with its own data access layer, the Elixir project Ecto provides a very nice solution for those using the Postgres relational database. We cover how to use Ecto in a Phoenix project in the Ecto Guide. Databases supported by Ecto are covered in the Usage section of the Ecto README.
Of course, there are many other data access options. Ets and Dets are key value data stores built into OTP. OTP also provides a relational database called mnesia with its own query language called QLC. Both Elixir and Erlang also have a number of libraries for working with a wide range of popular data stores.
The data world is your oyster, but we won’t be covering these options in these guides.
Flash Messages
There are times when we need to communicate with users during the course of an action. Maybe there was an error updating a schema. Maybe we just want to welcome them back to the application. For this, we have flash messages.
The
Phoenix.Controller module provides the
put_flash/3 and
get_flash/2 functions to help us set and retrieve flash messages as a key value pair. Let’s set two flash messages in our
HelloWeb.PageController to try this out.
To do this we modify the
index action as follows:
defmodule HelloWeb.PageController do . . . def index(conn, _params) do conn |> put_flash(:info, "Welcome to Phoenix, from flash info!") |> put_flash(:error, "Let's pretend we have an error.") |> render("index.html") end end
The
Phoenix.Controller module is not particular about the keys we use. As long as we are internally consistent, all will be well.
:info and
:error, however, are common.
In order to see our flash messages, we need to be able to retrieve them and display them in a template/layout. One way to do the first part is with
get_flash/2 which takes
conn and the key we care about. It then returns the value for that key.
Fortunately, our application layout,
lib/hello_web/templates/layout/app.html.eex, already has markup for displaying flash messages.
<p class="alert alert-info" role="alert"><%= get_flash(@conn, :info) %></p> <p class="alert alert-danger" role="alert"><%= get_flash(@conn, :error) %></p>
When we reload the Welcome Page, our messages should appear just above “Welcome to Phoenix!”
Besides
put_flash/3 and
get_flash/2, the
Phoenix.Controller module has another useful function worth knowing about.
clear_flash/1 takes only
conn and removes any flash messages which might be stored in the session.
Rendering
Controllers have several ways of rendering content. The simplest is to render some plain text using the
text/2 function which Phoenix provides.
Let’s say we have a
show action which receives an id from the params map, and all we want to do is return some text with the id. For that, we could do the following.
def show(conn, %{"id" => id}) do text conn, "Showing id #{id}" end
Assuming we had a route for
get "/our_path/:id" mapped to this
show action, going to
/our_path/15 in your browser should display
Showing id 15 as plain text without any HTML.
A step beyond this is rendering pure JSON with the
json/2 function. We need to pass it something that the Jason library can decode into JSON, such as a map. (Jason is one of Phoenix’s dependencies.)
def show(conn, %{"id" => id}) do json conn, %{id: id} end
If we again visit
our_path/15 in the browser, we should see a block of JSON with the key
id mapped to the number
15.
{"id": "15"}
Phoenix controllers can also render HTML without a template. As you may have already guessed, the
html/2 function does just that. This time, we implement the
show action like this.
def show(conn, %{"id" => id}) do html conn, """ <html> <head> <title>Passing an Id</title> </head> <body> <p>You sent in id #{id}</p> </body> </html> """ end
Hitting
/our_path/15 now renders the HTML string we defined in the
show action, with the value
15 interpolated. Note that what we wrote in the action is not an
eex template. It’s a multi-line string, so we interpolate the
id variable like this
#{id} instead of this
<%= id %>.
It is worth noting that the
text/2,
json/2, and
html/2 functions require neither a Phoenix view, nor a template to render.
The
json/2 function is obviously useful for writing APIs, and the other two may come in handy, but rendering a template into a layout with values we pass in is a very common case.
For this, Phoenix provides the
render/3 function.
Interestingly,
render/3 is defined in the
Phoenix.View module instead of
Phoenix.Controller, but it is aliased in
Phoenix.Controller for convenience.
We have already seen the render function in the Adding Pages Guide. Our
show action in
lib/hello_web/controllers/hello_controller.ex looked like this.
defmodule HelloWeb.HelloController do use HelloWeb, :controller def show(conn, %{"messenger" => messenger}) do render conn, "show.html", messenger: messenger end end
In order for the
render/3 function to work correctly, the controller must have the same root name as the individual view. The individual view must also have the same root name as the template directory where the
show.html.eex template lives. In other words, the
HelloController requires
HelloView, and
HelloView requires the existence of the
lib/hello_web/templates/hello directory, which must contain the
show.html.eex template.
render/3 will also pass the value which the
show action received for
messenger from the params hash into the template for interpolation.
If we need to pass values into the template when using
render, that’s easy. We can pass a dictionary like we’ve seen with
messenger: messenger, or we can use
Plug.Conn.assign/3, which conveniently returns
conn.
def index(conn, _params) do conn |> assign(:message, "Welcome Back!") |> render("index.html") end
Note: The
Phoenix.Controller module imports
Plug.Conn, so shortening the call to
assign/3 works just fine.
We can access this message in our
index.html.eex template, or in our layout, with this
Passing more than one value in to our template is as simple as connecting
assign/3 functions together in a pipeline.
def index(conn, _params) do conn |> assign(:message, "Welcome Back!") |> assign(:name, "Dweezil") |> render("index.html") end
With this, both
@message and
@name will be available in the
index.html.eex template.
What if we want to have a default welcome message that some actions can override? That’s easy, we just use
plug and transform
conn on its way towards the controller action.
plug :assign_welcome_message, "Welcome Back" def index(conn, _params) do conn |> assign(:message, "Welcome Forward") |> render("index.html") end defp assign_welcome_message(conn, msg) do assign(conn, :message, msg) end
What if we want to plug
assign_welcome_message, but only for some of our actions? Phoenix offers a solution to this by letting us specify which actions a plug should be applied to. If we only wanted
plug :assign_welcome_message to work on the
index and
show actions, we could do this.
defmodule HelloWeb.PageController do use HelloWeb, :controller plug :assign_welcome_message, "Hi!" when action in [:index, :show] . . .
Sending responses directly
If none of the rendering options above quite fits our needs, we can compose our own using some of the functions that Plug gives us. Let’s say we want to send a response with a status of “201” and no body whatsoever. We can easily do that with the
send_resp/3 function.
def index(conn, _params) do conn |> send_resp(201, "") end
Reloading should show us a completely blank page. The network tab of our browser’s developer tools should show a response status of “201”.
If we would like to be really specific about the content type, we can use
put_resp_content_type/2 in conjunction with
send_resp/3.
def index(conn, _params) do conn |> put_resp_content_type("text/plain") |> send_resp(201, "") end
Using Plug functions in this way, we can craft just the response we need.
Rendering does not end with the template, though. By default, the results of the template render will be inserted into a layout, which will also be rendered.
Templates and layouts have their own guide, so we won’t spend much time on them here. What we will look at is how to assign a different layout, or none at all, from inside a controller action.
Assigning Layouts
Layouts are just a special subset of templates. They live in
lib/hello_web/templates/layout. Phoenix created one for us when we generated our app. It’s called
app.html.eex, and it is the layout into which all templates will be rendered by default.
Since layouts are really just templates, they need a view to render them. This is the
LayoutView module defined in
lib/hello_web/views/layout_view.ex. Since Phoenix generated this view for us, we won’t have to create a new one as long as we put the layouts we want to render inside the
lib/hello_web/templates/layout directory.
Before we create a new layout, though, let’s do the simplest possible thing and render a template with no layout at all.
The
Phoenix.Controller module provides the
put_layout/2 function for us to switch layouts. This takes
conn as its first argument and a string for the basename of the layout we want to render. Another clause of the function will match on the boolean
false for the second argument, and that’s how we will render the Phoenix welcome page without a layout.
In a freshly generated Phoenix app, edit the
index action of the
PageController module
lib/hello_web/controllers/page_controller.ex to look like this.
def index(conn, _params) do conn |> put_layout(false) |> render("index.html") end
After reloading, we should see a very different page, one with no title, logo image, or css styling at all.
Very Important! For function calls in a pipeline, it is critical to use parentheses around the arguments because the pipe operator binds very tightly. This leads to parsing problems and very strange results.
If you ever get a stack trace that looks like this,
**(FunctionClauseError) no function clause matching in Plug.Conn.get_resp_header/2 Stacktrace (plug) lib/plug/conn.ex:353: Plug.Conn.get_resp_header(false, "content-type")
where your argument replaces
conn as the first argument, one of the first things to check is whether there are parentheses in the right places.
This is fine.
def index(conn, _params) do conn |> put_layout(false) |> render("index.html") end
Whereas this won’t work.
def index(conn, _params) do conn |> put_layout false |> render "index.html" end
Now let’s actually create another layout and render the index template into it. As an example, let’s say we had a different layout for the admin section of our application which didn’t have the logo image. To do this, let’s copy the existing
app.html.eex to a new file
admin.html.eex in the same directory
lib/hello_web/templates/layout. Then let’s remove the line in
admin.html.eex that displays the logo.
<span class="logo"></span> <!-- remove this line -->
Then, pass the basename of the new layout into
put_layout/2 in our
index action in
lib/hello_web/controllers/page_controller.ex.
def index(conn, _params) do conn |> put_layout("admin.html") |> render("index.html") end
When we load the page, we should be rendering the admin layout without a logo.
Overriding Rendering Formats
Rendering HTML through a template is fine, but what if we need to change the rendering format on the fly? Let’s say that sometimes we need HTML, sometimes we need plain text, and sometimes we need JSON. Then what?
Phoenix allows us to change formats on the fly with the
_format query string parameter. To make this happen, Phoenix requires an appropriately named view and an appropriately named template in the correct directory.
As an example, let’s take the
PageController index action from a newly generated app. Out of the box, this has the right view,
PageView, the right templates directory,
lib/hello_web/templates/page, and the right template for rendering HTML,
index.html.eex.
def index(conn, _params) do render conn, "index.html" end
What it doesn’t have is an alternative template for rendering text. Let’s add one at
lib/hello_web/templates/page/index.text.eex. Here is our example
index.text.eex template.
OMG, this is actually some text.
There are just a few more things we need to do to make this work. We need to tell our router that it should accept the
text format. We do that by adding
text to the list of accepted formats in the
:browser pipeline. Let’s open up
lib/hello_web/router.ex and change the
plug :accepts to include
text as well as
html like this.
defmodule HelloWeb.Router do use HelloWeb, :router pipeline :browser do plug :accepts, ["html", "text"] plug :fetch_session plug :protect_from_forgery plug :put_secure_browser_headers end . . .
We also need to tell the controller to render a template with the same format as the one returned by
Phoenix.Controller.get_format/1. We do that by substituting the atom version of the template
:index for the string version
"index.html".
def index(conn, _params) do render conn, :index end
If we go to, we will see
OMG, this is actually some text.
Of course, we can pass data into our template as well. Let’s change our action to take in a message parameter by removing the
_ in front of
params in the function definition. This time, we’ll use the somewhat less-flexible string version of our text template, just to see that it works as well.
def index(conn, params) do render conn, "index.text", message: params["message"] end
And let’s add a bit to our text template.
OMG, this is actually some text. <%= @message %>
Now if we go to, we will see “OMG, this is actually some text. CrazyTown”
Setting the Content Type
Analogous to the
_format query string param, we can render any sort of format we want by modifying the HTTP Content-Type Header and providing the appropriate template.
If we wanted to render an xml version of our
index action, we might implement the action like this in
lib/hello_web/page_controller.ex.
def index(conn, _params) do conn |> put_resp_content_type("text/xml") |> render("index.xml", content: some_xml_content) end
We would then need to provide an
index.xml.eex template which created valid xml, and we would be done.
For a list of valid content mime-types, please see the mime.types documentation from the mime type library.
Setting the HTTP Status
We can also set the HTTP status code of a response similarly to the way we set the content type. The
Plug.Conn module, imported into all controllers, has a
put_status/2 function to do this.
put_status/2 takes
conn as the first parameter and as the second parameter either an integer or a “friendly name” used as an atom for the status code we want to set. Here is the list of supported friendly names. Please note that the rule to convert a “friendly name” to an atom follows this rule. For example,
I'm a teapot becomes
:im_a_teapot.
Let’s change the status in our
PageController
index action.
def index(conn, _params) do conn |> put_status(202) |> render("index.html") end
The status code we provide must be valid - Cowboy, the web server Phoenix runs on, will throw an error on invalid codes. If we look at our development logs (which is to say, the iex session), or use our browser’s web inspection network tool, we will see the status code being set as we reload the page.
If the action sends a response - either renders or redirects - changing the code will not change the behavior of the response. If, for example, we set the status to 404 or 500 and then
render "index.html", we do not get an error page. Similarly, no 300 level code will actually redirect. (It wouldn’t know where to redirect to, even if the code did affect behavior.)
The following implementation of the
HelloWeb.PageController
index action, for example, will not render the default
not_found behavior as expected.
def index(conn, _params) do conn |> put_status(:not_found) |> render("index.html") end
The correct way to render the 404 page from
HelloWeb.PageController is:
def index(conn, _params) do conn |> put_status(:not_found) |> put_view(HelloWeb.ErrorView) |> render("404.html") end
Redirection
Often, we need to redirect to a new url in the middle of a request. A successful
create action, for instance, will usually redirect to the
show action for the schema we just created. Alternately, it could redirect to the
index action to show all the things of that same type. There are plenty of other cases where redirection is useful as well.
Whatever the circumstance, Phoenix controllers provide the handy
redirect/2 function to make redirection easy. Phoenix differentiates between redirecting to a path within the application and redirecting to a url - either within our application or external to it.
In order to try out
redirect/2, let’s create a new route in
lib/hello_web/router.ex.
defmodule HelloWeb.Router do use HelloWeb, :router . . . scope "/", HelloWeb do . . . get "/", PageController, :index end # New route for redirects scope "/", HelloWeb do get "/redirect_test", PageController, :redirect_test, as: :redirect_test end . . . end
Then we’ll change the
index action to do nothing but redirect to our new route.
def index(conn, _params) do redirect conn, to: "/redirect_test" end
Finally, let’s define in the same file the action we redirect to, which simply renders the text
Redirect!.
def redirect_test(conn, _params) do text conn, "Redirect!" end
When we reload our Welcome Page, we see that we’ve been redirected to
/redirect_test which has rendered the text
Redirect!. It works!
If we care to, we can open up our developer tools, click on the network tab, and visit our root route again. We see two main requests for this page - a get to
/ with a status of
302, and a get to
/redirect_test with a status of
200.
Notice that the redirect function takes
conn as well as a string representing a relative path within our application. It can also take
conn and a string representing a fully-qualified url.
def index(conn, _params) do redirect conn, external: "" end
We can also make use of the path helpers we learned about in the Routing Guide.
defmodule HelloWeb.PageController do use HelloWeb, :controller def index(conn, _params) do redirect conn, to: redirect_test_path(conn, :redirect_test) end end
Note that we can’t use the url helper here because
redirect/2 using the atom
:to, expects a path. For example, the following will fail.
def index(conn, _params) do redirect conn, to: redirect_test_url(conn, :redirect_test) end
If we want to use the url helper to pass a full url to
redirect/2, we must use the atom
:external. Note that the url does not have to be truly external to our application to use
:external, as we see in this example.
def index(conn, _params) do redirect conn, external: redirect_test_url(conn, :redirect_test) end
Action Fallback
Action Fallback allows us to centralize error handling code in plugs which are called when a controller action fails to return a
Plug.Conn.t. These plugs receive both the conn which was originally passed to the controller action along with the return value of the action.
Let’s say we have a
show action which uses
with to fetch a blog post and then authorize the current user to view that blog post. In this example we might expect
Blog.fetch_post/1 to return
{:error, :not_found} if the post is not found and
Authorizer.authorize/3 might return
{:error, :unauthorized} if the user is unauthorized. We could render the error views for these non-happy-paths directly.
defmodule HelloWeb.MyController do use Phoenix.Controller alias Hello.{Authorizer, Blog} alias HelloWeb.ErrorView def show(conn, %{"id" => id}, current_user) do with {:ok, post} <- Blog.fetch_post(id), :ok <- Authorizer.authorize(current_user, :view, post) do render(conn, "show.json", post: post) else {:error, :not_found} -> conn |> put_status(:not_found) |> put_view(ErrorView) |> render(:"404") {:error, :unauthorized} -> conn |> put_status(403) |> put_view(ErrorView) |> render(:"403") end end end
Many times - especially when implementing controllers for an API - error handling in the controllers like this results in a lot of repetition. Instead we can define a plug which knows how to handle these error cases.
defmodule HelloWeb.MyFallbackController do use Phoenix.Controller alias HelloWeb.ErrorView def call(conn, {:error, :not_found}) do conn |> put_status(:not_found) |> put_view(ErrorView) |> render(:"404") end def call(conn, {:error, :unauthorized}) do conn |> put_status(403) |> put_view(ErrorView) |> render(:"403") end end
Then we can reference that plug using action_fallback and simply remove the
else block from our
with. Our plug will receive the original conn as well as the result of the action and respond appropriately.
defmodule HelloWeb.MyController do use Phoenix.Controller alias Hello.{Authorizer, Blog} action_fallback HelloWeb.MyFallbackController def show(conn, %{"id" => id}, current_user) do with {:ok, post} <- Blog.fetch_post(id), :ok <- Authorizer.authorize(current_user, :view, post) do render(conn, "show.json", post: post) end end end
Halting the Plug Pipeline
As we mentioned - Controllers are plugs…. specifically plugs which are called toward the end of the plug pipeline. At any step of the pipeline we might have cause to stop processing - typically because we’ve redirected or rendered a response.
Plug.Conn.t has a
:halted key - setting it to true will cause downstream plugs to be skipped. We can do that easily using
Plug.Conn.halt/1.
Consider a
HelloWeb.PostFinder plug. On call, if we find a post related to a given id then we add it to
conn.assigns; and if we don’t find the post we respond with a 404 page.
defmodule HelloWeb.PostFinder do use Plug import Plug.Conn alias Hello.Blog def init(opts), do: opts def call(conn, _) do case Blog.get_post(conn.params["id"]) do {:ok, post} -> assign(conn, :post, post) {:error, :notfound} -> conn |> send_resp(404, "Not found") end end end
If we call this plug as part of the plug pipeline any downstream plugs will still be processed. If we want to prevent downstream plugs from being processed in the event of the 404 response we can simply call
Plug.Conn.halt/1.
case Blog.get_post(conn.params["id"]) do {:ok, post} -> assign(conn, :post, post) {:error, :notfound} -> conn |> send_resp(404, "Not found") |> halt() end
It’s important to note that
halt/1 simply sets the
:halted key on
Plug.Conn.t to
true. This is enough to prevent downstream plugs from being invoked but it will not stop the execution of code locally. As such
conn |> send_resp(404, "Not found") |> halt()
… is functionally equivalent to…
conn |> halt() |> send_resp(404, "Not found")
It’s also important to note that halting will only stop the plug pipeline from continuing. Function plugs will still execute unless their implementation checks for the
:halt value.
def post_authorization_plug(%{halted: true} = conn, _), do: conn def post_authorization_plug(conn, _) do . . . end | https://hexdocs.pm/phoenix/controllers.html | CC-MAIN-2018-43 | refinedweb | 4,418 | 65.42 |
Kubernetes best practices. Correct Terminate Disable
Kubernetes best practices. Kubernetes organization with namespace
Kubernetes best practices. Kubernetes viability test with Readiness and Liveness tests
Kubernetes best practices. Setting Queries and Resource Limits
An important point in the work of distributed systems is the processing of failures. Kubernetes helps with this by using controllers that monitor the state of your system and restart services that have stopped working. However, Kubernetes can forcibly shut down your applications to ensure overall system viability. In this series, we will look at how you can help Kubernetes do its job more efficiently and reduce application downtime.
Prior to using containers, most applications ran on virtual or physical machines. If the application crashed or crashed, it took a long time to remove the ongoing task and re-download the program. In the worst case, someone had to solve this problem manually at night, at the most inopportune time. If only 1-2 working machines performed an important task, such a malfunction was completely unacceptable.
Therefore, instead of manually restarting, they began to use monitoring at the process level to automatically restart the application in the event of its emergency termination. If the program crashes, the monitoring process captures the exit code and reboots the server. With the advent of systems such as Kubernetes, this type of system failure response has simply been integrated into the infrastructure.
Kubernetes uses the “observe — commit differences — commit” event loop to ensure that resources are operational along the way from the containers to the nodes themselves.
This means that you no longer need to manually start process monitoring. If a resource fails the Health Check, Kubernetes will simply automatically provide a replacement. Kubernetes does more than just monitor your application crashes. It can create more copies of the application to work on multiple machines, update the application, or simultaneously run multiple versions of your application.
Therefore, there are many reasons why Kubernetes can interrupt a perfectly healthy container. For example, if you upgrade your deployment, Kubernetes will slowly stop old pods while launching new ones. If you disconnect a node, Kubernetes will terminate all hearths in that node. Finally, if the node runs out of resources, Kubernetes will disable all pods to free these resources.
Therefore, it is very important that your application stops working with minimal impact on the end user and minimum recovery time. This means that before disconnecting it must save all the data that needs to be saved, close all network connections, complete the remaining work and have time to complete other urgent tasks.
In practice, this means that your application should be able to process the SIGTERM message – the process termination signal, which is the default signal for the kill utility in Unix family OS. After receiving this message, the application should disconnect.
After Kubernetes decided to complete the pod, a whole series of events took place. Let’s look at every step that Kubernetes takes when a container or hearth completes.
Suppose we want to complete one of the hearths. At this point, it will stop receiving new traffic – containers working in the hearth will not be affected, but all new traffic will be blocked.
Let’s look at the preStop hook – this is a special command or HTTP request sent to containers in the hearth. If your application does not turn off correctly when SIGTERM is received, you can use preStop to exit correctly.
Most programs when they receive a SIGTERM signal finish correctly, but if you use third-party code or some system that you cannot fully control, the preStop hook is a great way to cause a graceful shutdown without changing the application.
After executing this hook, Kubernetes will send a SIGTERM signal to the containers in the hearth, which will let them know that they will be disconnected soon. Having received this signal, your code will proceed to the shutdown process. This process may include stopping any long-lived connections, such as connecting to a database or a WebSocket stream, saving the current state, and the like.
Even if you use the preStop hook, it is very important to check what exactly happens with your application when you send it a SIGTERM signal, how it behaves in such a way that events or changes in the system’s operation caused by the hearth shutdown are not a surprise to you.
At this point, before taking further action, Kubernetes will wait for a specified time, called terminationGracePeriodSecond, or the period for it to shut down correctly when it receives a SIGTERM signal.
By default, this period is 30 seconds. It is important to note that it lasts in parallel with the preStop hook and the SIGTERM signal. Kubernetes won’t wait for the preStop hook and SIGTERM to end — if your application exits before the TerminationGracePeriod expires, Kubernetes will proceed immediately to the next step. Therefore, check that the value of this period in seconds is not less than the time required for the hearth to turn off correctly, and if it exceeds 30 s, increase the period to the desired value in YAML. In the above example, it is 60s.
And finally, the last step – if the containers still continue to work after the terminationGracePeriod expires, they will send a SIGKILL signal and will be forcibly deleted. At this point, Kubernetes will also clean out all other pod objects.
Kubernetes shuts down hearths for many reasons, so make sure that in any case your application will be completed correctly to ensure stable operation of the service.
To be continued very soon …
A bit of advertising 🙂
Thank you for staying with us. Do you like our articles? Want to see more interesting materials? Support us by placing an order or recommending to your friends, cloud about How to Build Infrastructure Bldg. class c using Dell R730xd E5-2650 v4 servers costing 9,000 euros for a penny? | https://prog.world/kubernetes-best-practices-correct-terminate-disable/ | CC-MAIN-2022-33 | refinedweb | 985 | 52.8 |
Event structure for BLE_GAP_EVT_ADV_REPORT. More...
#include <ble_gap.h>
Event structure for BLE_GAP_EVT_ADV_REPORT.
Advertising or scan response data.
Set when the scanner is unable to resolve the private resolvable address of the initiator
field of a directed advertisement packet and the scanner has been enabled to report this in ble_gap_scan_params_t::adv_dir_report.
Advertising or scan response data length.
Bluetooth address of the peer device. If the peer_addr resolved: @ref ble_gap_addr_t::addr_id_peer is set to 1
and the address is the device's identity address.
Received Signal Strength Indication in dBm.
If 1, the report corresponds to a scan response and the type field may be ignored.
See GAP Advertising types. Only valid if the scan_rsp field is 0. | http://infocenter.nordicsemi.com/topic/com.nordic.infocenter.s132.api.v5.0.0/structble__gap__evt__adv__report__t.html | CC-MAIN-2018-22 | refinedweb | 116 | 52.66 |
Build Machine Learning prototypes web applications lightning fast.
Project description
Overview
Open source, Python-based tool to build prototypes lightning fast ⚡
- Website:
- Documentation:
- Source code:
- Installation:
pip install fast-dash
Fast Dash is a Python module that makes the development of web applications fast and easy. It is built on top of Plotly Dash and can be used to build web interfaces for Machine Learning models or to showcase any proof of concept without the hassle of developing UI from scratch.
Simple example
With Fast Dash's decorator
fastdash, it's a breeze to deploy any Python function as a web app. Here's how to use it to write your first Fast Dash app:
from fast_dash import fastdash @fastdash def text_to_text_function(input_text): return input_text # * Running on (Press CTRL+C to quit)
And just like that (🪄), we have a completely functional interactive app!
Output:
Fast Dash can read additional details about a function, like its name, input and output types, docstring, and uses this information to infer which components to use.
For example, here's how to deploy an app that takes a string and an integer as inputs and return some text.
from fast_dash import fastdash @fastdash def display_selected_text_and_number(text: str, number: int) -> str: "Simply display the selected text and number" processed_text = f'Selected text is {text} and the number is {number}.' return processed_text # * Running on (Press CTRL+C to quit)
Output:
And with just a few more lines, we can add a title icon, subheader and other social branding details.
from fast_dash import fastdash @fastdash(title_image_path='', github_url='', linkedin_url='', twitter_url='') def display_selected_text_and_number(text: str, number: int) -> str: "Simply display the selected text and number" processed_text = f'Selected text is {text} and the number is {number}.' return processed_text
Output:
Read different ways to build Fast Dash apps and additional details by navigating to the project documentation.
Key features
- Launch an app only by specifying the types of inputs and outputs.
- Use multiple input and output components simultaneously.
- Flask-based backend allows easy scalability and customizability.
- Build fast, share and iterate.
Some features are coming up in future releases:
- More input and output components.
- Deploy to Heroku and Google Cloud.
- and many more.
Community
Fast Dash is built on open-source. You are encouraged to share your own projects, which will be highlighted on a common community gallery (coming up).
Credits
Fast Dash is built using Plotly Dash. Dash's Flask-based backend enables Fast Dash apps to scale easily and makes them highly compatibility with other integration services. This project is partially inspired from gradio.
The project template was created with Cookiecutter and zillionare/cookiecutter-pypackage.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/fast-dash/ | CC-MAIN-2022-40 | refinedweb | 465 | 54.12 |
TOTD #42: Hello JavaServer Faces World with NetBeans and GlassFish
By arungupta on Aug 19, 2008
This TOTD (Tip Of The Day) shows how to create a simple Java Server Faces application using NetBeans IDE 6.1. This is my first ever Java Server Faces application :) Much more comprehensive applications are already available in NetBeans and GlassFish tutorials.:
This particular TOTD is using JSF 1.2 that is already bundled with GlassFish
v2. Let's get started.:
Subsequent entries on this trail will show how Java Server
Faces Technology Extensions, Facelets, Mojarra make
the application richer.
Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.
Technorati: totd mysql javaserverfaces netbeans glassfish:
- How to create a JSF application using NetBeans IDE ?
- How to populate a JSF widget with a Managed Bean ?
- How to use a Persistence Unit with JSF widgets ?
- How to setup navigation rules between multiple pages ?
- How to print simple error validation messages ?
- How to inject a bean into another class ?
- In NetBeans IDE, create a new project
- Create a new NetBeans Web project and enter the values ("Cities") as shown:
and click on "Next".
- Choose GlassFish v2 as the deployment server and click on "Next".
- Select "JavaServer Faces" framework as shown below:
take defaults and click on "Finish".
- Create a Persistence Unit as explained in TOTD #38. The values required for this TOTD are slightly different and given below.
- Use the following table definition:
- There is no need to populate the table.
- Use "jndi/cities" as Data Source name.
- There is no need to create a Servlet.
- Add the following NamedQuery:
right after the highlighted parentheses shown below:
- Create a new bean which will perform all the database operations
- Right-click on "Source Packages", select "New", "Java Class..." and specify the values as shown below:
and click on "Finish".
- Create a new class instance variable for "Cities" entity class by adding a new variable and accessor methods as shown below:
and then injecting in "faces-config.xml" as shown by the fragment below:
- In "server.DatabaseUtil"
- Inject EntityManager and UserTransaction as shown:
- Add a method that returns a Collection of all entries in the database table as shown below:
- Add a method that will save a new entry in the database by using values from the injected "Cities" entity class as shown below:
- Finally, right-click in the editor pane and select "Fix Imports":
and click on "OK". Make sure to pick the right package name for "NotSupportedException" and "RollbackException".
- Add Java Server Faces widgets in the main entry page
- In "welcomeJSF.jsp", drag/drop "JSF Form" widget on line 22 as shown below:
- Select "Form Generated from Entity Class" and specify "server.Cities" entity class in the text box as shown:
- The generated code fragment looks like:
It generates a 2-column table based upon fields from the entity class. We will use this form for accepting inputs by making the following changes:
- Remove first two "h:outputText" entries because "id" is auto generated.
- Change "h:outputText" that uses value expression to "h:inputText" to accept the input.
- Use "cities" managed bean instead of the default generated expression.
- Add required="true" to inputText fields. This will ensure that the form can not be submitted if text fields are empty.
- Add "id" attributes to inputText fields. This will be used to display the error message if fields are empty.
- Add a button to submit the results:
This must be added between </h:panelGrid> and </h:form> tags.
- Add a placeholder for displaying error messages:
right after <h:commandButton> tag. The official docs specify the default value of "false" for both "showSummary" and "showDetail" attribute. But TLD says "false" for "showSummary" and "true" for "showDetail". Issue# 773 will fix that.
- Add a new page that displays result of all the entries added so far
- Right-click on the main project, select "New", "JSP..." and specify the name as "result".
- Add the following namespace declarations at top of the page:
- Drag/Drop a "JSF Data Table" widget in the main HTML body and enter the values as shown:
The generated code fragment looks like:
Change the <h:dataTable> tag as shown below (changes highlighted in bold):
- This page will be used to show the results after an entry is added to the database. Add a new button to go back to the entry page by adding the following fragment:
between </h:form> and </f:view> tags.
- Add the navigation rules to "faces-config.xml" as shown below:
The corresponding XML fragment is::
- JSF Tag Library & API docs
- javaserverfaces.dev.java.net - the community website
- Java EE 5 JSF Tutorial and many more on the community website right navbar.
- Java Server Faces on SDN
- GlassFish Webtier Aggregated Feed
- Feedback
Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.
Technorati: totd mysql javaserverfaces netbeans glassfish
Great job Arun! I'm so glad you are helping to do these kinds of things for JSF now.
Posted by Roger Kitain on August 19, 2008 at 11:25 PM PDT #
Thanks for TOTD. I got a problem when i tried this with JavaDB. Below is the stacktrace from GlassFish log
Local Exception Stack:
Exception [TOPLINK-4002] (Oracle TopLink Essentials - 2.0.1 (Build b04-fcs (04/11/2008))): oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: java.sql.SQLSyntaxErrorException: Attempt to modify an identity column 'ID'.
Error Code: -1
Call: INSERT INTO CITIES (ID, CITY_NAME, COUNTRY_NAME) VALUES (?, ?, ?)
bind => [null, Hyderabad, India]
Query: InsertObjectQuery(server.Cities[id=null]).internal.sessions.AbstractSession.executeCall(AbstractSession.java:690)
Posted by Madhu on August 20, 2008 at 02:35 PM PDT #
Madhu,
There might be more than one change required for running it with JavaDB. For example the table definition need to look like:
create table cities(id integer NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
city_name varchar(20),
country_name varchar(20),
PRIMARY KEY(id));
I got an error by executing the DDL given in the blog. Can you try with the table above ?
Posted by Arun Gupta on August 21, 2008 at 08:16 AM PDT #
Hi, I am using postgresql database and for sample work you need put :
@GeneratedValue(strategy=GenerationType.IDENTITY)
at Cities entity.
another point is that result.jsp file the correct me seen:
<h:dataTable instead <h:dataTable.
thanks
Posted by Alex on August 26, 2008 at 05:58 AM PDT #
Superb demonstration thanks.
Posted by lava kafle on August 26, 2008 at 11:50 AM PDT #
Thanks a lot Arun.
So, now I can able to code in JSF.
Posted by Kumar Gaurav on August 26, 2008 at 06:23 PM PDT #
I tried it is easy to able the code
Posted by msn adresleri on September 03, 2008 at 07:57 AM PDT #
Great work. Readers might want to check out the new enterprise tech tip for composite components in JSF 2.0
Ed
Posted by Ed Burns on September 09, 2008 at 02:50 AM PDT #
Posted by Arun Gupta's Blog on September 16, 2008 at 10:47 PM PDT #
i like to know how i can insert a String array
into a data table .
i.e.
On select a drop down in a table a new row should be created with that particular field filled with array values.
Posted by jaysonkn on October 13, 2008 at 04:29 PM PDT #
Posted by Arun Gupta's Blog on October 13, 2008 at 10:55 PM PDT #
Çok güzel bir yazı. Genel kültürün gelişmesi açısından çok değerli bilgiler içeriyor. Dil konusuna gelince; artık bilim dilinin İngilizce olduğunu kabullenmek lazım. Bilim değil de aslında teknoloji dili demeliyiz. Çağın gerisinde kalmamak için İngilizce’ye savaş açmak yerine onunla barışık yaşamalıyız. Dünyada en çok kullanılan dil (konuşma dili) Çincedir. Bunun sebebi de malum nüfus farkı. ikinci dil İspanyolca ve evet 3.dil Türkçedir
Posted by devbahis on October 26, 2008 at 08:31 AM PDT #...
Posted by siyaset on October 26, 2008 at 08:32 AM PDT #
thanks
Posted by iibf on October 26, 2008 at 08:33 AM PDT #
very nice works
Posted by malatya on October 26, 2008 at 08:35 AM PDT #
Sometimes, we live java application problems when we try to add them to our web sites. Some browsers can not read them correctly. w3 site's validator also says same errors.
If you make a new lesson about integration java scripts to web pages, i will be so happy. (classes e.g.)
Thanks for your lesson again.
Posted by sunucu on January 27, 2009 at 05:00 PM PST #
thanks for this kind of infos
Posted by kolbastı on February 04, 2009 at 01:53 AM PST #
thanks
Posted by Tabela on February 22, 2009 at 08:54 PM PST #
Java Programming... I love you!
Posted by Web Hosting on May 06, 2009 at 12:28 PM PDT #
thank you bro!...
Posted by beylikdüzü halı yıkama on July 02, 2009 at 11:20 PM PDT #
hello.. thank you for this article.
Posted by porno on July 14, 2009 at 03:25 PM PDT # | https://blogs.oracle.com/arungupta/entry/totd_42_hello_javaserver_faces | CC-MAIN-2015-32 | refinedweb | 1,539 | 64.71 |
I have to drop the lowest score, and i can't include it in the calculation of the average. for input validation cant accept negative numbers for test score.
#include <iostream> #include <iomanip> using namespace std; void arrSelectSort(float *, int); void showArrPtr(float *, int); void showAverage(float, int); int main() { float *scores, //To dynamically allocate an array total=0.0; //Accumulator float lowest; int numScores; //To hold the number of test scores //Get the number of test scores. cout << "How many test scores would you like to process? "; cin >> numScores; //Dynamically allocate an array large enough to hold that many //test scores scores = new float[numScores]; if(scores==NULL) return 0; //Get the test score for each test cout << "Enter the test scores below.\n"; for (int count = 0; count < numScores; count++) { cout << "Test score #" << ( count + 1 ) << ": "; cin >> scores[count]; //Validate the input. while (scores < 0) { cout << "Zero or negative numbers not accepted.\n"; cout << "Test Score #" << (count + 1) << ": "; cin >>scores[count]; } } //Calculate the total scores for (int count = 0; count < numScores; count++) { total += scores[count]; } //sort the elements of the array pointers arrSelectSort ( scores, numScores ); //Will display them in sorted order. cout << "The test scores in ascending order are: \n"; showArrPtr ( scores, numScores ); showAverage( total, numScores ); //Get lowest lowest = scores[0]; for ( int count = 1; count < numScores; count++) { if(scores[numScores] < lowest) lowest = scores[numScores]; } //Free memory. delete [] scores; return 0; } void arrSelectSort(float *array, int size) { int startScan, minIndex; floatArrPtr(float *array, int size) { for (int count=0; count< size; count++) cout << array[count] << " "; cout << endl; } void showAverage(float total, int numScores) { float average; //Calculate the average average = (total - lowest) / (numScores - 1); //Display the results. cout << fixed << showpoint << setprecision(2); cout << "When dropping lowest score the average is: " << average << endl; }
I don't really know how not to include it in the calculation, or if someone could give me a hint as to how to drop a test score without including it in calculation. there is nothing in this chapter or past ones that talks about doing something like this .but the way i did it it should work, execpt it tells me that lowest is an undiclared indentifier in the void showAverage part. Also when i remove this part my input validation wont work properly, actually it doesn't work at all. Why is that is no different from how i had written it before.
Edited by mike_2000_17: Fixed formatting | https://www.daniweb.com/programming/software-development/threads/204940/drop-lowest-score | CC-MAIN-2017-09 | refinedweb | 403 | 57.4 |
I am studying in college and got stuck with an excercise in my book, should be simple but I insist of making life hard :p
Mission:
My code was sopoused to take a binary number with use of functions I have recently learned and mve all the digits to the right.
(the excercise actually said to take a number of 4 digits and didn;t mention wether to keep the 0 before the number or not)
Ex:
Like the number: 00101000 will become 00010100
problem:
The problem is I can't seem to find what;s wrong with the code but when I run it, it crushes (appearetnly it crushes after the first scan I tried to place a print code right after but it didn't print but crushed before) .
I didn't study yet what global parameters are, and I don;t know yet how to get the vector into the function.
And I dunno strings .
main idea:
I decided to store all numbers one digit at a time as a decimal in an integer.
I wanted to keep the the 0s before the first 1 and not make a number of 0010 into a 10...so I decided to count the 0s before the first digit of 1, and then my function will print the number of zeros +1 times the digit 0 (since I want to move all digits to the right , it adds another 0), then the function prints the number after the loop finishes printing the 0s.
my code:
#include <stdio.h> void bin(int num,int count) { for(int i=1;i<=(count+1);i++) printf("0"); printf("%d",(num/10)); } void main() { int num=0,count=0,a; printf("Please enter the bin number, 1 digit at a time,at the end\nenter a number different than 0 or 1\n"); do { scanf("%d",a); if ((a==0)&&(num==0)) count++; num=(num*10+a); } while ((a==0)||(a==1)); if(num==1) printf("the number can not b divided by 2\n"); if(num==0) printf("0/2=0\n"); if((num!=0)&&(num!=1)) bin(num,count); }
the parameters:
a= the digits each time a different one.
num= the 'bin like' decimal number.
counter= for counting the number of 0s before the number.
(I placed same names in the function for they represent same values).
the function is void since it;s for printinf number of 0s and then the number inside num.
I know I could turn it into decimal devide it by 2 and back into bin but I want the hard life :P....nah actually I already started this way so I insist of finding the bug :3
If you find anything please let me know~ | https://www.daniweb.com/programming/software-development/threads/250630/my-code-crushes-o-o-please-look-into-it | CC-MAIN-2017-22 | refinedweb | 459 | 71.38 |
Method::Cached - The return value of the method is cached to your storage
package Foo; use Method::Cached; sub cached :Cached { time . rand } sub no_cached { time . rand } package main; my $test1 = { cached => Foo->cached, no_cached => Foo->no_cached }; sleep 1; # It is preferable that time passes in this test my $test2 = { cached => Foo->cached, no_cached => Foo->no_cached }; is $test1->{cached}, $test2->{cached}; isnt $test1->{no_cached}, $test2->{no_cached};
Method::Cached offers the following mechanisms:
The return value of the method is stored in storage, and the value stored when being execute it next time is returned.
In beginning logic or the start-up script:
use Method::Cached::Manager -default => { class => 'Cache::FastMmap' }, -domains => { 'some-namespace' => { class => 'Cache::Memcached::Fast', args => [ ... ] }, }, ;
For more documentation on setting of cached domain, see Method::Cached::Manager.
This function is mounting used as an attribute of the method.
The cached rule is defined specifying the domain name.
sub message :Cached('some-namespace', 60 * 5, LIST) { ... }
When the domain name is omitted, the domain of default is used.
sub message :Cached(60 * 5, LIST) { ... }
use Method::Cached::KeyRule::Serialize;
Satoshi Ohkubo <s.ohkubo@gmail.com>
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~boxphere/Method-Cached-0.051/lib/Method/Cached.pm | CC-MAIN-2014-15 | refinedweb | 207 | 64.91 |
WiPy 2.0 + Blynk
So I'm new to python and the blynk system but figured it couldn't be that difficult. However I'm getting stuck at importing the library. The steps I took:
- Update Firmware: 1.7.2.b1
- Build the FTP connection with Pymakr plugin on the Atom IDE
- Update the boot.py to connect to home wifi network (suc. connection)
- Run the main.py code over the connection
The Code I want to run is from the example and tragically simple:
import BlynkLib
import time
BLYNK_AUTH = "my token"
blynk = BlynkLib.Blynk(BLYNK_AUTH)
def v3_write_handler(value):
print('Current slider value: {}'.format(value))
blynk.add_virtual_pin(3, write=v3_write_handler)
blynk.run()
However on bootup I get the following message:
Traceback (most recent call last):
File "main.py", line 2, in <module>
File "/flash/lib/BlynkLib.py", line 126, in <module>
File "/flash/lib/BlynkLib.py", line 127, in HwPin
AttributeError: type object 'Timer' has no attribute 'B'
When I try to run the program I get a different error:
AttributeError: 'module' object has no attribute 'Blynk'
Now I checked and the library does include the class Blynk and only shared import is "time".
I''m using Atom IDE pymakr plugin on MicroPython v1.8.6
If someone could help me solve this it would be awesome
@BitNide
I got it "working" ... at least 02_virtual_read.py.
In BlynkLib.py I changed _TimerMap = {}.
Rationale: Presumably the HwPin class is not used by virtual pins. Then it would not be surprising if they are not affected by the change. Not sure yet if I even care about non-virtual pins.
- crankshaft
This post is deleted!
YAY I FIXED IT
for anyone wanting to use the library and connect to blink.
in the MachineStub.py file remove line 203+
in the BlynkLib.py file change all the instances of:
time.sleep_ms -> MachineStub.sleep_ms
time.ticks_ms -> MachineStub.ticks_ms
time.ticks_diff -> MachineStub.ticks_diff
be sure to import both the MachineStub and machine module in the BlynkLib.py module
how would I go about migrating it?
Also I have been debugging a bit further and found the following code which seems to be at fault:
timeModule = sys.modules["time"]
setattr(timeModule, 'sleep_ms', sleep_ms)
setattr(timeModule, 'ticks_ms', ticks_ms)
setattr(timeModule, 'ticks_diff', ticks_diff)
I get the error : "keyError: time"
It seems to me like the time module does not accapt the additional attributes.
@BitNide
Code is for wipy1.0 not for wipy2.0
You must migrate it or wait for pycom for migrating (if it is somehow in progress)
could the problem be in the HwPin class using machine.Timer.B? resulting in import error? | https://forum.pycom.io/topic/1344/wipy-2-0-blynk | CC-MAIN-2018-09 | refinedweb | 437 | 59.5 |
After over two years of development from over 30 contributors, the jclouds team are happy to announce release of jclouds 1.0.0!
This release supports 30 cloud providers as well as cloud service software such as OpenStack, Eucalyptus, and vCloud. As we move forward we will use semantic versioning, so based on the version, you'll be able to tell whether it is a minor tweak vs new functionality.
jclouds 1.0.0 is available from maven central, and if you need help getting jars, please refer to our install guide. You can also follow any of our 10 examples, if you'd like a walk-through on how common features work!
There are a lot of new features in jclouds 1.0.0. Here are the highlights.
- Broader Support for providers including Ninefold Storage and OpenStack Nova
- AdminAccess.standard() which installs your login on compute clouds with zero config
- OSGi support including Karaf integration
- Enhanced BlobStore including public acl support, multipart uploads and BlobBuilder
- Revamped clojure bindings more notably compute2 and blobstore2 namespaces
- Queriable cloud provider metadata including friendly names, doc links, and locations
Find the team on irc freenode #jclouds and tell them what you like/don't like. If you're interested in participating in what's next, drop us a line on the google group.
If you'd like to read more into these features, check out our blog post! | http://www.theserverside.com/discussions/thread.tss?thread_id=62450 | CC-MAIN-2016-40 | refinedweb | 235 | 59.64 |
A Monoflop class for easier looping...
A Monoflop class for easier looping...
Join the DZone community and get the full member experience.Join For Free
Bring content to any platform with the open-source BloomReach CMS. Try for free.
List<String> listToJoin = ... boolean first = true; StringBuilder result = new StringBuilder(); for(String item : listToJoin) { if (!first) { result.append(", "); } first = false; result.append(item); } System.out.println(result);Yes, this works like a charm, but it also looks plain ugly. And don't ask how long you have to debug if you get the position of the first = false wrong.
A simple class called Monoflop can help here. A properly commented version can be found on GitHub, but a very short version will do here. Using this neat helper results in the following:
public class Monoflop { private boolean toggled = false; /** * Reads and returns the internal state. * Toggles it to true once. */ public boolean successiveCall() { if (toggled) { return true; } toggled = true; return false; } /** * Inverse of successiveCall */ public boolean firstCall() { return !successiveCall(); } }
And this...
List<String> listToJoin = ... Monoflop mf = new Monoflop(); StringBuilder result = new StringBuilder(); for(String item : listToJoin) { if (mf.successiveCall()) { result.append(", "); } result.append(item); } System.out.println(result);
Granted, you didn't save a whole lot of lines in this example. However, the logic is much more visible, and you have less possibilities for errors. The only thing that is a bit misleading is the call named successiveCall(), which has side-effects. On the other hadn, as it toggles the internal state, I didn't want to make it a getter ( isSuccessiveCall()) since that would be even more evil.
Feel free to use this class in your own code base (but use the one from GitHub - as it is better documented). However, if you like it and you have uses for the fastest dependency injection framework out there with lots of other features, check out: SIRIUS ( GitHub), which contains Monoflop. It is OpenSource (MIT license) and developed and maintained by scireum GmbH. }} | https://dzone.com/articles/monoflop-class-easier-looping | CC-MAIN-2018-39 | refinedweb | 331 | 66.44 |
Evolution-data-server provides the various backend components for the
Evolution integrated mail/PIM suite, including the Berkeley database
backend and the libical calendar components.
WWW:
make generate-plist
Dependency line: evolution-data-server>0:databases/evolution-data-server
To install the port: cd /usr/ports/databases/evolution-data-server/ && make install cleanTo add the package: pkg install evolution-data-server
cd /usr/ports/databases/evolution-data-server/ && make install clean
pkg install evolution-data-server
PKGNAME: evolution-data-server
There is no flavor information for this port.
distinfo:
TIMESTAMP = 1538599385
SHA256 (gnome3/evolution-data-server-3.28.5.tar.xz) = d95348d27207cde4ff3209d16c9336fd2a97d958f4c563450ccdf2f7c07e8788
SIZE (gnome3/evolution-data-server-3.28.5.tar.xz) = 4455136
NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
This port is required by:
===> The following configuration options are available for evolution-data-server-3.28.5_1:.6+,build sqlite tar:xz ssl
Number of commits found: 165 evolution suite to 3.28.5.
Obtained from: GNOME devel repo:
databases/evolution-data-server: unbreak with ICU 61
src/libedataserver/e-alphabet-index-private.cpp:79:2: error: unknown type name
'UnicodeString'; did you mean 'icu::UnicodeString'?
UnicodeString string;
^~~~~~~~~~~~~
icu::UnicodeString
/usr/local/include/unicode/unistr.h:286:20: note: 'icu::UnicodeString' declared
here
class U_COMMON_API UnicodeString : public Replaceable
^
src/libedataserver/e-alphabet-index-private.cpp:132:3: error: unknown type name
'UnicodeString'; did you mean 'icu::UnicodeString'?
UnicodeString ustring;
^~~~~~~~~~~~~
icu::UnicodeString
/usr/local/include/unicode/unistr.h:286:20: note: 'icu::UnicodeString' declared
here
class U_COMMON_API UnicodeString : public Replaceable
^
PR: 227042
Reported by: antoine (via exp-run)
Bump PORTREVISION on ports depending on devel/libical.
PR: 226460
databases/evolution-data-server: switch to C++11, required by ICU >= 59
In file included from e-alphabet-index-private.cpp:37:
In file included from /usr/local/include/unicode/alphaindex.h:18:
:
Add back USES=gperf to unbreak configure
Update evolution suite to 3.24.
* The build system switched to CMake, with ninja. Drop USES=gmake
* Remove systemd files, we have no need for them.
* Add/update WWW to new location
evolution-data-server:
* Remove double icu dependacy
* Make LDAP into a option, like mail/evolution
* Remove BDB warning message. This message was here if the user had a
nondefault bdb version selected. Due to that the eds only use bdb
version 5, the message can go.
evolution:
* Reenable MAPS option
* Add YTNEF option to support MS Outlook TNEF format
* Gstreamer is not used anymore
Update evolution-data-server to 3.22.7
Update evolution to 3.22.6
Update evolution-ews to 3.22.6
* all: record missing dependancies.
* eds: libsoup-gnome is not needed replace it with libsoup.
* eds: merge the compiler USES into the main USES line, it is always
needed. Remove the related text.
* evo: move the canberra-gtk3 dependancy to the canberra option
* evo-ews: drop google doc link in pkg-descr.
Obtained from: gnome devel repo
-), categories d, e, f, and g.
Update e-d-s and evolution-ews to 3.18.5 and evolution to 3.18.5.1.
Add patch to fix evolution crash when trying to open the preferences menu. [1]
PR: 207360 [1]
Reported by: lumiwa@gmail.com [1]
Obtained from: evolution upstream evolution suite to 3.16.5.
Make the camel-lock-helper-1.2 application part of the mail group and set sgid.
This allows evolution do do local mail delivery again.
PR: 202119
Submitted by: mwisnicki+freebsd@gmail
Update the Gnome stack to the latest in the 3.14 series.
Thanks to Gustau Perez <gustau.perez@gmail.com> for helping to keep thes
ports updated.
Obtained from: GNOME dev repo
Update evolution suite to 3.12.10.
Fix build of mail/evolution on 9.x and 8.x after webkit updates [1]
PR: 196079 [1]
196706 [1]
Submitted by: truckman@, lawrence chen <beastie@tardisi
We currently only support kerberos from base. Make sure it disabled and
not accidently picked up by configure.
PR: 194760
Submitted by: Neel Chauhan <neel@neelc.org>
Fix without gperf in base
Delete calls to g_thread_init. It isn't needed with glib 2.32 and up
and the port doesn't link with libgthread-2.0.
Reported by: anto PORTREVISION on all ports with USE_SQLITE=yes or USE_SQLITE=3 that
have not been bumped yet after the latest libsqlite3.so library version
change..
Catch up with libxml2 api breakage in 2.9.x
There is no _WITH/_WITHOUT support in bsd.options.mk. Use _ON/_OFF instead
for the kerberos option and fix typo in _OFF line.
Submitted by: John Hein <john.hein@microsemi.com>
Fix kerberos enable/disable flags.
PR: ports/189037 (based on)
Submitted by: barbara@
Switch to USES=libtool
Use options helpers
Use options sub.
Fix LIB_DEPENDS conversion
Reported by: truckman
In preparation for making libtool generate libraries with a sane name, fix all
LIB_DEPENDS in databases
- Chase security/libtasn1 update
- Add UPDATING entry
Add NO_STAGE all over the place in preparation for the staging support (cat:
databases)
Chase libtasn1 switching from USE_GNOME=pkgconfig to USES=pkgconfig
That has made pkgconf being a Build deps instead of Build+Run deps thus ports
depending on pkgconf need an explicit dependency
- Convert USE_GETTEXT to USES (part 4)
Approved by: portmgr (bapt)
Forgot to remove the plist part of the IMAP option (which was commented out
anyway) in the last commit.
Submitted by: pointyhat via miwi
Convert almost all gnome@ ports to OptionsNG, trim header, use USES=pathfix
instead of gnomehack and pet portlint.
Add conflicts with future gnome3 versions.
Reviewed by: miwi, libtasn1 update
- Bump PORTREVISIONs for dependant ports
>
Rev 1.13 did not fix the build with Heimdal 1.5.x because the
extra include files required by the updated <com_err.h> were
guarded by "#if HAVE_COM_ERR_H", which configure does not define
when compiling the test program. There was also an extra inclusion
of <com_err.h> inside the #if/#endif block. For some odd reason, a number
of the other include file tests in the configure script are written
with an extra #include, but it is harmless in those cases because the
#if conditions are not met.
Remove the #if test and the redundant include from the test program for
<com_err.h>. Make a similar fix to the test for <et/com_err.h>.
Reviewed by: mezz
- update png to 1.5.10
Fix the build with Heimdal 1.5.x.
PR: ports/167989
Reported by: truck heimdal support because it doesn't build.
PR: ports/154095
Submitted by: David Demelier <demelier.david@gmail.com>.
Bump PORTREVISION in a few more ports affected by the libgcrypt
upgrade (they have references to libgcrypt.so.16).
Presenting GNOME 2.30.2. for FreeBSD.
Bounce PORTREVISION for gettext-related ports. Have fun, ya'll.
Update libical to 0 the build when gtkdoc isn't installed.
Submitted by: QAT.
Chase libtasn1 shared library version bump.
Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD
12 vulnerabilities affecting 85 ports have been reported in the past 14 days
* - modified, not new
All vulnerabilities
Last updated:2019-02-15 15:06:58 | https://www.freshports.org/databases/evolution-data-server | CC-MAIN-2019-09 | refinedweb | 1,182 | 51.34 |
Hi:
- Every Form on the design surface of a Visual Studio
2005 DeviceApplication project has a property called FormFactor. Can
you locate it in System.Windows.Forms.dll using
Reflector or
ILDAsm or the
Reflection API?
- If the
ErrorProvider component is present on the design surface, all other
controls on the design surface have new property called Error on
errorProvider1. Can you locate this property in System.Windows.Forms.dll?.
Brief Architecture of TypeDescriptor.
- First stage: The CTD is obtained (as above)
from the latest provider and it returns the initial set of properties.
- Second stage: The input set is merged with the set of
properties contributed by the Extender Providers. To obtain this set,
GetExtendedTypeDescriptor is called on the latest provider for the instance.
- Third stage: If the component is sited and the site
proffers the
ITypeDescriptorFilterService, the input set is passed to the
ITypeDescriptorFilterService to be filtered. This forms the basis of the
IDesignerFilter that allows a
ComponentDesigner to add/change/delete properties from the set of
properties of the component it is designing.
- Fourth stage: The input set is filtered based on a set
of attributes. The rules used for this filtering is explained well in the
Remarks section
here..
Capabilities of the TypeDescriptor:
A simple illustration......
What a long post! should take a couple of readings to grok this 🙂
I had requested the Master (Brian Pepin, of course) for his expert comments. Here is what he has to say:
<From_BrianPe>
The post looks good; and everything I saw looks accurate. One thing that is worth mentioning: the call to GetTypeDescriptor gets called a lot and you will end up creating a great deal gen zero instances of your UselessCustomTypeDescriptor. There are two ways to fix this:
1. Implement your ICustomTypeDescriptor as a struct, not a class. This is what we do internally, but it means that you can’t derive from the default CustomTypeDescriptor implementation.
2. Cache your custom type descriptor for each object. This can be tricky because you need to implement a weak hash table (or else all the objects will stay in memory, which is far worse than the problem you were originally trying to solve.
<From_BrianPe>
So please note the point above about the call to GetTypeDescriptor.
BTW: Brian is the architect of all of .NET Designtime and currently the architect of Cidar. Refer to his homepage at
Can you show how to route a CF DLL’s metadata to its corresponding .asmmeta.dll? It is a very good example to demo TypeDescriptionProvider.
I have read all of the major players blogs and articles and all are great, but this one gives a rich understanding (not for the faint of heart) that most articles don’t bother to include. In short, great work!
I am considering using a generic implementation of custom type descriptors/custom property descriptor to help manage security in my business object layer. Primarily, I am interested in using the custom property descriptor and extend with properties that give me some flexibility of presentation in the UI. Also, using the property descriptor as a means of getting/setting information about security. Given that I could store the security metadata (per user) and at startup, attach the security metadata to the types in the custom type descriptor. Does this sound completely crazy? Have you encountered other solutions to providing dynamic security information about a business object to the presentation layer? I appreciate your work and community support.
–Eddie Frederick
eddie_frederick@teamhealth.com
How to make the type descriptor persistent.
how to store/reuse the created property through type descriptor.
how to use created property after closing and reopening the application?
Hi Partho,
Great article on this tricky subject…
Can I ask you a tricky question in relation to it ?
Do you know how to get to the implementing class of the current typedescriptor node for a type ?
I suspect this is difficult/impossible because
TypeDescriptor appears to use delegates to call the relevant methods on the active type descriptor; meaning the instance
returned from GetTypeDescriptor((typeof(MyType)) is not an instance of the CTD but some sort of wrapper around it.
Am I correct ? I can test this by implementing an interface on the CTD.
For example…
– My CTD implements an interface but when I ask for the currently active instance of the TypeDescriptor the result does not implement it but calling the methods on it works fine….
public class MyCTD : CustomTypeDescriptor, ISomeInterface
{
override public getproperties()
{
….
}
}
{
TypeDescriptorProvider provider = TypeDescritor.GetProvider(typeof(MyType));
ICustomTypeDescriptor ctd = provider.GetTypeDescriptor(typeof(MyType));
ISomeInterface si = (ctd as ISomeInterface); // always null……
ctd.GetProperties() // calls the correct method in my CTD
}
Any ideas ?
Thanks
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from
PingBack from | https://blogs.msdn.microsoft.com/parthopdas/2006/01/04/understanding-the-typedescriptor-a-metadata-engine-for-designtime-code/?replytocom=3313 | CC-MAIN-2018-05 | refinedweb | 789 | 56.86 |
[Solved] My program will only run on some computers
Hey!
I have developed a program in Qt. I have built an exe file for it and it has worked fine on several computers. Today when the program was tried on another computer it wouldnt start. Nothing came up. Not in the processor chart or anything. I have put all required *.dll files and other with the exe file on deployment.
What can be the problem? Could it be related to 32 bit vs 64 bit?
I dont think it is related to any libraries etc missing since it starts on most computers.
Please help
- koahnig Moderators
Your question is very open and does not give enough specifics.
Obviously you have an application for windows which will obviously run only on windows, but not on Linux or Mac.
If you have 64 bit application you cannot start on 32 bit installations. However, I would expect an error message.
Old computers may not have enough RAM and cannot load.
Probably you should check with depends that you really have all dlls.
Sometimes it helps to start application from cmd for getting error messages.
Im sorry for my vague description.
The computer I am trying to run my application on is 64 bit as well. Linux and Mac is not relevant for now. My program is 32 bit so it should work fine.
Also, it is a relatively new computer. Same as mine, and all the others I've tried out. So I guess thats not the issue.
Tried adding all *.dll files and folders from this tutorial:, and didnt remove any of them. Still didnt work.
Tried running the program from cmd on my computer and it worked perfectly.
Went to the other computer and did the same, no errors, but it wouldnt show/start.
Is my problem any clearer?
Hi,
If you are using Qt Quick, your target computer will need OpenGL 2.0 or higher unless you also used ANGLE.
Thanks! I used Qt Quick, but not ANGLE! Is it possible for me to just open my pro file (or similar) in ANGLE and rebuild it or something?
[quote author="skammers" date="1407238942"]Is it possible for me to just open my pro file (or similar) in ANGLE and rebuild it or something?[/quote]Sort of :) Just download a non-OpenGL version of Qt, and use that to build your project.
ANGLE is a library that converts OpenGL functions to DirectX functions. See here for details:
Thanks!
So I tried downloading this version : Qt 5.3.1 for Windows 32-bit (VS 2013, 559 MB) (Info).
Got this error:
:-1: error: Qt Creator needs a compiler set up to build. Configure a compiler in the kit options.
So far I understand I need to download MSVC2013? Which I cant find..
Yes, you need the compiler that matches your version of the libraries.
You can get Visual Studio 2013 Express from Microsoft's website.
Is it free?
Yes, the Express version is free.
Thanks! Now downloaded it and installed it. Weird thing is that when I try to build my program I get errors that some of the header files are missing. When I try to build the program like I used to I dont get these errors...Why is that? Sorry if I'm a slow learner, but you're very helpful now!
Thanks
What is the exact error message?
These four:
-c:\qt\sommerjobb - tomra\transed\Source\languagelist.h:3: error: C1083: Cannot open include file: 'langitmodel.h': No such file or directory
-C:\Qt\Sommerjobb - Tomra\TransEd\Source\languagelist.h:3: error: C1083: Cannot open include file: 'langitmodel.h': No such file or directory
-C:\Qt\Sommerjobb - Tomra\TransEd\Source\masterresource.cpp:12: error: C1083: Cannot open include file: 'JlCompress.h': No such file or directory
-C:\Qt\Sommerjobb - Tomra\TransEd\Source\languagelist.h:3: error: C1083: Cannot open include file: 'langitmodel.h': No such file or directory
I found the error: i did this
@#include "languagelist.h"
#include <langitmodel.h>@
instead of this:
@#include "languagelist.h"
#include "langitmodel.h"@ | https://forum.qt.io/topic/44499/solved-my-program-will-only-run-on-some-computers | CC-MAIN-2018-17 | refinedweb | 682 | 70.39 |
Legal
Ask a Lawyer and Get Answers to Your Legal Questions
Good evening! I can help you out with your legal question tonight. You unfortunately can only do so much. You can simply give him another copy of what you reported he made on the 1099. You can give him a copy of that and then he will need to check his tax return (which is where the error likely is).
You as the payor only have to show what you reported he made. He will have to solve the other problems from incorrectly entering the wrong amount.
He will likely be audited and have to provide all of his proof to substantiate his figures.
Did you have any other questions tonight?
The letter indicates the problem is on my reporting he sent in the correct 1099 does that make any difference my total payout to contractors on m tax return was not nearly that mych.
I don't believe it is on your reporting, otherwise you would be getting audited and not him. You can only show the IRS evidence of what you reported he made.
Do not give him your tax return, but simply the 1099 that you are required to give him.
Giving him any more information than that, would compromise your personal information.
the letter indicates he understated his income and that "income reported by Julius brown $100,098" should I visit the IRS in the Morning with him our 1099's agree.
Yes, make sure the IRS has a correct 1099 from you. That will help him tremendously.
Thanks so much for your help. | http://www.justanswer.com/law/7pe0t-independent-contractor-driven-10-years.html | CC-MAIN-2016-40 | refinedweb | 269 | 73.37 |
Compiling C DLLs and using them from PerlDecember 4th, 2006 at 8:58 pm
A few months ago I managed to control a National Instruments Digital IO card (sitting in a PCI slot in my PC) from Perl. I accomplished this by installing the Win32::API module, and loading the card’s .dll API. I had a few struggles with Win32::API as some things weren’t obvious, but after some searching and good advice from Perlmonks, it worked.
Today I had another encounter with Win32::API. I have some C code I want to access from Perl. So, I compiled it in Visual C++ into a DLL, but Win32::API kept segfaulting, although loading the same DLL from another C++ program worked fine. Another round of investigation began…
To cut a long story short, here is the correct way to compile C code into a DLL and access it from Perl.
The C code
I’m going to write a simple C function that demonstrates some interesting concepts like passing data in and out with pointers. Here is the .h file:
int __stdcall test1(char* buf, int num, char* outbuf);
This is a (almost) normal declaration of a function named
test1 that takes two pointers to a char and one integer as arguments, and returns an integer.
__stdcall is a Visual C++ compiler keyword that specifies the stdcall calling convention. The stdcall convention is used by Windows API functions.
There’s another common calling convention -
__cdecl which is usually used for “normal” (not Windows API) code. The Win32::API Perl module supports only __stdcall, so while we could use __cdecl for binding this DLL to another piece of C / C++ code, it doesn’t work with Win32::API.
The .c file provides the implementation:
#include "dll_test.h" int __stdcall test1(char* buf, int num, char* outbuf) { int i = 0; for (i = 0; i < num; ++i) { outbuf[i] = buf[i] * 3; } return num; }
DEF file
A module definition (.def) file provides the linker with information about exported symbols, which is useful when writing DLLs. I create a new text file, name it dll_test.def and put it into the project directory:
LIBRARY DLL_TEST.DLL EXPORTS test1
In this file I specify the library name, and the name of the exported function (several names appear on separate lines). Now this .def file should be given as an option to the linker. Add
/DEF dll_test.def as a linker option, or provide “dll_test.def” in the “Module definition file” field (Input category) in the project properties (Linker options).
After this, build the project and the DLL will be created.
Without the DEF file ?
It is possible to create the DLL without using the .def file. If you prepend
__declspec(dllexport) to the function declaration, the linker will export it without consulting the .def file. While this works well in C++ code calling the functions from the DLL, this method isn’t recommended when using Win32::API, because
__stdcall mangles the names of functions and it may be difficult (though possible) to import them to Perl. The DEF file instructs the linker to create an unmangled name for the function, in spite of using
__stdcall, so it is the preferred method.
In any case, the
dumpbin command line tool (built into Windows) allows to see the names of exported functions in a DLL by calling:
dumpbin /exports
The Perl code
Finally, we can use Win32::API to import the C function we created from the DLL and use it:
use warnings; use strict; $|++; use Win32::API; # Import the test1 function from the DLL # my $test1 = Win32::API->new('dll_test', 'test1', 'PNP', 'N'); die unless defined $test1; # the input must be a buffer of bytes, # so we use pack # my $buf = pack('C*', (1, 2, 3, 4, 5)); # allocate space for the output buffer # my $outbuf = ' ' x 5; # Call the imported function # my $ret = $test1->Call($buf, 5, $outbuf); # Results # print "Returned $ret\n"; print join ' ', unpack('CCCCC', $outbuf), "\n";
P.S.
A good discussion of this topic is given in this Perlmonks thread.
Related posts:
December 26th, 2006 at 14:13
To solve the mangaling issue you could wrap the exported functions declarations with extern “C”. | http://eli.thegreenplace.net/2006/12/04/compiling-c-dlls-and-using-them-from-perl/ | crawl-002 | refinedweb | 703 | 59.43 |
ran into a problem tracking down a bug in an XQuery library I was
writing, where Saxon-SA was returning an error message, but not telling
me where the fault was occuring.
The answer may simple by 'well next time turn on -T', but perhaps it
might be considered possible and appropriate for Saxon to emit more
details in this case.
I wrote a stupid bug in an XQuery library module where I forgot to test an
edge case, where a function can return an element() or (), and I forgot
to test for the () case.
So I was basically taking 'let $e := ()' and trying to pass it to
element { node-name($e) } { ... }
and naturally this Caused Problems. But the only error message I got
back was simply 'Query processing failed: Run-time errors were reported',
without location information. :(
This is effectively a 'hey idiot, remember to check for the null case',
but humans being human I'm sure it'll happen again. Turning on -T lets
one figure out where the error occurs of course, and perhaps that should
be the way things are -- I just fear a case where code is running in a
production environment and every once in a while some subtle edge case
falls into this trap: tracking such a bug down might
prove difficult to reproduce in order to trace it.
Idiot Code Example:
declare namespace f="uri.test";
declare function f:empty-return()
as element()? {
if (doc-available("I_do_not_exist"))
then doc("I_do_not_exist")/*
else ()
};
declare function f:process-data($element as element()?)
as element()?
{
let $e := f:empty-return()
(: oops, forgot 'where exists($e)' :)
return (
element {node-name($e)} {
$e/@*, $e/*, <foo/>
}
)
};
declare function f:run() {
f:process-data(f:empty-return())
};
f:run()
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
James A. Robinson jim.robinson@...
Stanford University HighWire Press
+1 650 7237294 (Work) +1 650 7259335 (Fax)
View entire thread | https://sourceforge.net/p/saxon/mailman/message/13252967/ | CC-MAIN-2017-39 | refinedweb | 309 | 58.11 |
The QDeepCopy class is a template class which ensures that implicitly shared and explicitly shared classes reference unique data. More...
All the functions in this class are reentrant when Qt is built with thread support.
#include <qdeepcopy.h>
List of all member functions.DeepCopy<QString> global_string; // global string data QMutex global_mutex; // mutex to protext global_string ... void setGlobalString( const QString &str ) { global_mutex.lock(); global_string = str; // global_string is a deep copy of str global_mutex.unlock(); } ... void MyThread::run() { global_mutex.lock(); QString str = global_string; // str is a deep copy of global_string global_mutex.unlock(); // process the string data ... // update global_string setGlobalString( str ); }.
Constructs an empty instance of type T.
Constructs a deep copy of t.
Returns a deep copy of the encapsulated data.
Assigns a deep copy of t.
This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved. | http://doc.trolltech.com/3.1/qdeepcopy.html | crawl-002 | refinedweb | 140 | 63.66 |
cc[ flag... ] file... -lldap[ library... ]
#include <lber.h>
#include <ldap.h>
The cldap_search_s() function
performs an LDAP search using the Connectionless LDAP (CLDAP) protocol.
cldap_search_s() has parameters and behavior identical to that of ldap_search_s(3LDAP), except for the addition of the logdn parameter.
logdn should contain a distinguished name to be used only for logging purposed by the LDAP server. It should be in the text format described by RFC 1779, A String Representation of Distinguished Names.
cldap_search_s() operates using the CLDAP protocol over udp(7P). Since UDP is a non-reliable protocol, a retry mechanism is used to increase reliability. The cldap_setretryinfo(3LDAP) function can be used to set two retry parameters: tries, a count of the number of times to send a search request and timeout, an initial timeout that determines how long to wait for a response before re-trying. timeout is specified seconds. These values are stored in the ld_cldaptries and ld_cldaptimeout members of the ld LDAP structure, and the default values set in ldap_open(3LDAP) are 4 and 3 respectively. The retransmission algorithm used is:
Assume that the default values for tries and timeout of 4 tries and 3 seconds are used. Further, assume that a space-separated list of two hosts, each with one address, was passed to cldap_open(3LDAP). The pattern of requests sent will be (stopping as soon as a response is received):
Time Search Request Sent To:
+0 Host A try 1
+1 (0+3/2) Host B try 1
+2 (1+3/2) Host A try 2
+5 (2+6/2) Host B try 2
+8 (5+6/2) Host A try 3
+14 (8+12/2) Host B try 3
+20 (14+12/2) Host A try 4
+32 (20+24/2) Host B try 4
+44 (20+24/2) (give up - no response)
cldap_search_s() returns LDAP_SUCCESS if a search was successful and the appropriate LDAP error code otherwise. See ldap_error(3LDAP) for more information.
See attributes(5) for a description of the following attributes:
ldap(3LDAP), ldap_error(3LDAP), ldap_search_s(3LDAP), cldap_open(3LDAP), cldap_setretryinfo(3LDAP), cldap_close(3LDAP), attributes(5), udp(7P) | http://www.shrubbery.net/solaris9ab/SUNWaman/hman3ldap/cldap_search_s.3ldap.html | CC-MAIN-2016-30 | refinedweb | 352 | 58.52 |
Programming and Bloom's Taxonomy
Michael Taggart
・7 min read
Originally appeared on my blog
You can't get near education theory in the 21st century without Benjamin Bloom's categorization of cognitive activities. Yes, there are accompanying taxonomies for affective and psychomotor activities, but for now, I'm focusing on the cognitive set. Let's all recite them together:
- Remembering
- Comprehending
- Applying
- Analyzing
- Synthesizing
- Evaluating
Amen. These are ordered by cognitive complexity, and as it so happens, map extremely well onto the process of learning the skills of computer programming. Perhaps that's obvious: a framework for describing learning describes how you learn something. Still, I'd argue that the congruence is remarkable. Some subjects engage with many of these levels at once, asking students to analyze, synthesize, and evaluate all at once. In programming, it is difficult to approach a higher echelon before mastering the ones beneath.
Let me explain with my own experience learning these skills.
Remembering
There are ways in which learning to program is like learning a language—the symbols and syntax of each programming language is unique, and it takes time to memorize these sufficiently to produce code that is not riddled with errors.
Here's a bit of code in Python—my first language—that checks whether a number is even or odd:
def is_even(n): if n % 2 == 0: return True elif n % 2 != 0: return False
I am willing to bet that even if you've never read a line of Python before, you can make some sense of that. The modulus (
%) returns the remainder of integer division, so essentially I'm saying:
If, when dividing
nby 2, I get zero,
nreally is even.
Otherwise, if I get anything that isn't zero,
nis a phony, a fake. A flim-flammer.
This bit of Python is syntactically perfect. Of course, it took me days to be able to rattle off something like this without constantly checking the reference documentation. But the syntax, the symbols, were the first thing I had to learn. I had to remember what Python was even supposed to look like.
But you know what? This code is partially nonsense. It's a little like a Chomsky sentence: structurally sound, but functionally useless. To fix it, I had to comprehend something more about the task. In this case, something about the nature of functions, and the nature of if/else statements.
Comprehending
"If it's sunny, then I'll go to the beach. Otherwise, I'll go to the movies."
if sunny == True: go_to_beach() else: go_to_movies()
A contrived example perhaps, but that's how conditionals work. We perform tests on our information, and then we execute different commands accordingly. You can see how the same structure exists in English, and maps well onto Python's syntax. But you'll also notice that the second part of the code is an
else, not an
elif like above. Conditionals, if they have more than one option, must end with
else; it's the default behavior. And I didn't comprehend why when I wrote the syntactically-correct version above, it was yelling at me.
Once I comprehended the rule that conditionals must end in
else, I rewrote it as:
def is_even(n): if n % 2 == 0: return True else: return False
But I still had more to comprehend. This time, about functions.
That
def is_even(n): line above is the start of a function. Like in math, functions take information in and spit something back out. That's what the
return symbol is all about: spitting back either
True or
False based on whether the number is even. Functions can only return one thing—after they do a return, they stop until they're "called again." So if I have
4 to my
is_even function, it would test to see if
4 is evenly divided by
2, and if so, return
True. The
elif ("else if") part of the code isn't even touched in that case.
Turns out, I don't even need the
else! Just a single
if will do, because if that test passes,
return ensures the rest of the function isn't run. So my final form looks like:
def is_even(n): if n % 2 == 0: return True return False
A tad more elegant, no? Concision isn't necessarily a goal for programmers (readability is), but it tends to result from a deep understanding of the tools at one's disposal.
UPDATE: Bob accurately points out that the following:
def is_even(n): return n % 2 == 0
...is even more elegant. I often, when writing teaching code, leave the last step of elegance to preserve clarity. Still, the above is worth understanding as an improvement.
Applying
Certainly writing a function to tell even numbers from odd is a bit contrived. Nevertheless, the patterns I learned in this exercise would become invaluable in the countless functions I went on to write.
If we zoom out from the specifics of evenness/oddness, what is this function doing? It's determining the nature of an object, sorting it into one of two categories. Of course in human life, binary states are rare; life is shades of gray. But in pure reason, it is often the case that we must determine whether something is or is not. Every time we do, this pattern emerges. With a firm comprehension of the form of this reasoning, of its pattern, I was able to apply it to other problems.
Analyzing
As a writer and English teacher, I took pride in my proofreading abilities. I'd find that missing Oxford comma, that malapropistic "its." Reflecting on this ability, I'm certain it derives from sufficient practice, yes, but also sufficient reading such that my comprehension of the "rules" of language made violations of the rules readily apparent.
A typo or grammatical error can be irritating. A program that doesn't run because of a logical flaw or a typo (it can be difficult to tell one from the other) is infuriating.
Here's a line I feed to all my students: programming is a uniquely frustrating discipline, because most of the time you spend working on a problem, it feels like you're failing. Once it works, you stop and move on to the next. And when you get it wrong, you know right away. No craft so ruthlessly exposes your own logical flaws.
Enticed yet?
Okay so to properly analyze one's own code or someone else's, the first two tiers of the Taxonomy must be well in-hand.
Ah, but analysis in the realm of programming also refers to the problem domain. A programmer has to understand the operating parameters and solve within them. To build a bridge, one must first read the terrain, measure the gorge, and plan carefully. It is in analysis where programming begins to resemble engineering in its thought processes.
Synthesizing
The examples so far have been rather simplistic, in part because they can afford to be. And even in most programming curricula, the tasks set students will be one- or two-dimensional up to the level of synthesis: write a function that takes X and returns Y; implement function A to integrate with output from function B. You'll find significant amounts of scaffolding in the early chapters of a programming curriculum. In order to give beginners anything like an authentic experience, the tasks set to them have to be contextualized in ways that resemble what they'll encounter later in their journey. Real software applications have a lot of moving parts. It's a Rube Goldberg machine up in there.
So by the time students reach the synthesis phase, they can code a multidimensional project. A real application.
Or, y'know, direct an OK Go video.
Evaluating
Up here in the thin air of Bloom's, we have evaluation. In programming—and perhaps in writing—this is where style emerges. The programmer has a strong enough foundation in their craft to begin making choices about how to approach a given problem. Shall we use object-oriented or functional patterns? Would a stack or queue data structure make more sense? Should I attempt a recursive algorithm?
Choice may even extend into the programming language for a given problem domain. With broad experience comes a broad array of options, and programming languages are just tools.
But how could a programmer make such choices without experiencing extending to fully completed projects. Put another way, programmers must have completed a synthesis before ascending to evaluation.
Beyond Bloom
Bloom's Taxonomy fits the programmer's journey well enough, as far as it goes. However, one criticism of the Taxonomy is its hyper-individualism. It's a curious model, given the social nature of, well, humanity in general, but certainly of learning. Any Harkness teacher will tell you that deep learning is a social enterprise, yet Bloom functionally elides discourse and dialectic in the Taxonomy. One might argue that the emotive component of the Taxonomy speaks to it, but emotional activities are specifically distinct from the cognitive, yet intellectual discussions are definitionally cognitive.
So I have beef with Bloom here, especially when it comes to programming, since programmers no longer work in solitude. To fully master the craft, a programmer must join a community of others, sharing experiences and code in an open marketplace of ideas.
Sharing
It takes courage to publish one's code to the internet, open to the scrutiny of others. Yet millions do this every day. We'll talk more about open source tomorrow, but joining the community of programmers is an integral part of mastery. Learning how to discuss issues, contribute helpfully on projects, and hold oneself accountable for assigned work as part of a team transforms programming from craft to profession.
Community-Building
Lastly, the young programmer will hopefully start their own open source project that develops a community of contributors around it. Here, the programmer is now also a connector of people, responsible for the communal bonds that develop around the common goal of the project. Success here is no longer situated in the craft of programmer, but the more holistic human skills we hope to impart to all students.
In short, the full path of the programmer is not so different from other skills or disciplines: mastery makes one, through the hard work along the way, a good person.
Do you have any energy and time for your personal goals after a full day of work at your job?
><<
def isEven(n): return n % 2 == 0;
Totally right! This is where writing teaching code rather than maximally elegant code sometimes produces different results.
Sorry I should have explained why I posted that.
If you have a piece of code of the form:
Be aware that
conditionis a Boolean already, by virtue of fitting in the if statement. It's not elegance I wanted to demonstrate, I just wanted to raise type awareness.
I'll be keen to write a little explanation if I'm ever tempted to reply with a line of code like this :)
I saw you updated the article. You're swift with feedback, kudos :) | https://dev.to/mttaggart/programming-and-blooms-taxonomy-85g | CC-MAIN-2020-10 | refinedweb | 1,863 | 63.49 |
This question already has an answer here:
What I need is to overload Operator + in C# so I can sum 2 matrixes.
What I have is this function:
public int[,] operator+(int[,] matriz1, int[,] matriz2) { int[,] retorno = new int[4, 4]; for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { retorno[i, j] = matriz1[i, j] + matriz2[i, j]; } } return retorno; }
When I do this for example (WT, W1, W2 are all int[4,4]):
WT = W1 + W2;
I get an error saying: operator + cannot be applied to operands of type int[,] and int[,], what am I doing wrong and how can I solve it?
You can't write operators for non-custom types:
Operator overloading permits user-defined operator implementations to be specified for operations where one or both of the operands are of a user-defined class or struct type.
An alternative might be to write an extension method:
public static class MatrixExtensions { public static int[,] Add(this int[,] matriz1, int[,] matriz2) { int[,] retorno = new int[4, 4]; for (int i = 0; i < 4; i++) { for (int j = 0; j < 4; j++) { retorno[i, j] = matriz1[i, j] + matriz2[i, j]; } } return retorno; } }
And use it like this:
int[,] a = ... int[,] b = ... int[,] c = a.Add(b);
It's not working in this case because operator overloading has to be done for the class it works on and in this case you're only overloading it for whatever class this method is contained within. Basically, when you do W1 + W2 it looks for a '+' operator defined for
int[,], which doesn't exist for that built-in type.
For it to work on matrices as you're trying to do, you would need to make a
Matrix class (perhaps internally storing its values as a 2d integer array as you're doing) and then overload the operator for that class.
Well, for one, before you even try to use the operator, try compiling just the operator method. It won't compile, and the error message is telling:
error CS0563: One of the parameters of a binary operator must be the containing type
This means exactly what it says: if you write a class
C, you can write an addition operator for
C + <any type> (or
<any type> + C). You cannot define an operator that does not involve
C in some way. So, simply, since you're not the one writing the
int[,] class, you can't define an operator for it.
Your best bet is probably to define a
Matrix class yourself, then you can define whatever operators you want on it. For instance;
public class Matrix { private readonly int[,] _values; public int this[int x, int y] { get { return _values[x, y]; } } public Matrix(int[,] values) { _values = values; } public static Matrix operator +(Matrix x, Matrix y) { int[,] m0 = x._values; int[,] m1 = y._values; int[,] newMatrix = /* add m0 and m1 */; return new Matrix(newMatrix); } }
Similar Questions | http://ebanshi.cc/questions/1225907/matrix-overload-operator-c-sharp | CC-MAIN-2017-51 | refinedweb | 493 | 50.7 |
Working with the call records API in Microsoft Graph
Call records provide usage and diagnostic information about the calls and online meetings that occur within your organization when using Microsoft Teams or Skype for Business. You can use the call records APIs to subscribe to call records and look up call records by IDs.
The call records API is defined in the OData sub-namespace,
microsoft.graph.callRecords.
Key resource types
Call record structure
The callRecord entity represents a single peer-to-peer call or a group call between multiple participants, sometimes referred to as an online meeting.
A peer-to-peer call contains a single session between the two participants in the call. Group calls contain one or more session entities. In a group call, each session is between the participant and a service endpoint.
Each session contains one or more segment entities. A segment represents a media link between two endpoints. For most calls, only one segment will be present for each session, however sometimes there may be one or more intermediate endpoints.
In the diagram above, the numbers denote how many children of each type can be present. For example, a 1..N relationship between a callRecord and a session means one callRecord instance can contain one or more session instances. Similarly, a 1..N relationship between a segment and a media means one segment instance can contain one or more media streams. | https://docs.microsoft.com/en-us/graph/api/resources/callrecords-api-overview?view=graph-rest-1.0 | CC-MAIN-2021-39 | refinedweb | 236 | 64.61 |
Support »
Pololu Dual VNH5019 Motor Driver Shield User’s Guide
View document on multiple pages.
View this document as a printable PDF: dual_vnh5019_motor_driver_shield.pdf
- 1. Overview
-
- 2. Contacting Pololu
- 3. Getting Started with an Arduino
-
- 4. Using as a General-Purpose Motor Driver
-
- 5. Schematic Diagram
- 6. Customizing the Shield
-
- 7. Using the Driver in Single-Channel Mode
- 8. Differences between board revisions
1. Overview
This user’s guide focuses on the latest version (ash02b) of the Pololu dual VNH5019 motor driver shield, but most of the information also applies to the earlier ash02a version. See Section)
- chipKIT Max32 Arduino-Compatible Prototyping Platform (PIC32-based Arduino clone)
- ~23 A. The Due should generally be able to handle this since the MCU’s integrated protection diodes will clamp the input voltage to a safe value (and since the CS circuit has a 10 kΩ resistor in series with the output, only a few hundred microamps at most will flow through that diode). However, if you really want to be safe, you can use a 3.3 V zener diode to clamp the current sense output voltage to a maximum of ~3.3 V. If you want to get the full range of current feedback while using the Due, you can disconnect the shield’s current sense pins from the Due and then.
The other through-holes on the shield are used for more advanced things like customizing the Arduino pin mappings and are not necessary for getting started using this shield with an Arduino. They are discussed in more detail later in this guide.. Logic power, VDD, is automatically supplied by the Arduino.
Note that the motor driver features over-voltage protection that can activate.5 – 24 V probably will not be available. For example, the recommended operating voltage of the Arduino Uno is 7 – 12 V. VNH5019VNH5019MotorShield > Demo
from the Arduino IDE, or by copying the following code into a new sketch:
#include "DualVNH5019MotorShield.h" DualVNH5019MotorShield md; void stopIfFault() { if (md.getM1Fault()) { Serial.println("M1 fault"); while(1); } if (md.getM2Fault()) { Serial.println("M2 fault"); while(1); } } void setup() { Serial.begin(115200); Serial.println("Dual VNH5019 Motor Shield"); md.init(); } void loop() { for (int i = 0; i <= 400; i++) { md.setM1Speed(i); stopIfFault(); if (i%200 == 100) { Serial.print("M1 current: "); Serial.println(md.getM1CurrentMilliamps()); } delay(2); } for (int i = 400; i >= -400; i--) { md.setM1Speed(i); stopIfFault(); if (i%200 == 100) { Serial.print("M1 current: "); Serial.println(md.getM1CurrentMilliamps()); } delay(2); } for (int i = -400; i <= 0; i++) { md.setM1Speed(i); stopIfFault(); if (i%200 == 100) { Serial.print("M1 current: "); Serial.println(md.getM1CurrentMilliamps()); } delay(2); } for (int i = 0; i <= 400; i++) { md.setM2Speed(i); stopIfFault(); if (i%200 == 100) { Serial.print("M2 current: "); Serial.println(md.getM2CurrentMilliamps()); } delay(2); } for (int i = 400; i >= -400; i--) { md.setM2Speed(i); stopIfFault(); if (i%200 == 100) { Serial.print("M2 current: "); Serial.println(md.getM2CurrentMilliamps()); } delay(2); } for (int i = -400; i <= 0; i++) { md.setM2Speed(i); stopIfFault(); if (i%200 == 100) { Serial.print("M2 current: "); Serial.println(md.getM2CurrentMilliamps()); } delay(2); } }
This example ramps motor 1 speed from zero to max speed forward, to max speed reverse, and back to zero again over a period of about 3 seconds, while checking for motor faults and periodically printing the motor current to the serial monitor. It then performs the same process on motor 2 before repeating all over again.
Note: Even if you don’t have any motors yet, you can still try out this sketch and use the motor indicator LEDs for feedback that the shield is working properly..
4.a. Assembly for Use as a General-Purpose Motor Driver
- Logic connections: The 13 small holes along the left side of the board, highlighted in red in the above diagram, are used to interface with the motor drivers. You can optionally solder a.
With the exception of the pins labeled “MxEN A=B” and “MxCS_DIS”, all of the through-holes not highlighted in the above diagram are only relevant when using this driver as an Arduino shield. The “MxEN A=B” and “MxCS_DIS” pins are explained in the “Pinout” portion of Section 4.b, but they will not be needed in typical applications and can generally be ignored..
5. Schematic Diagram
This schematic is also available as a downloadable pdf: dual VNH5019 motor driver shield schematic (356k pdf)
This schematic is for the latest version of the shield (ash02b).. hole of the pair to create a new connection. You can later use shorting blocks to restore the default pin mapping if you populate the severed hole pairs with 2×1 pieces of the included 0.1″ male header strip..
7. Using the Driver in Single-Channel Mode
The dual VNH5019 motor driver shield uses two VNH5019 motor driver ICs to enable independent control of two bidirectional brushed DC motors, and each motor channel by itself is capable of delivering up to 12 A of continuous current while tolerating brief current spikes up to 30 A. If you need more power than this, however, you can combine the).
8. Differences between board revisions
There have been two versions of the dual VNH5019 motor driver shield. The revision identifier, ash02a or ash02b, can be found printed near the upper left corner of the board (below the mounting hole). While either version should generally be a drop-in replacement for the other, there are some minor differences between the two versions.
The first revision, released in October 2011, was ash02a.
This version predated the Arduino Uno R3, so it lacked pass-throughs for the four new pins present on the R3 and all newer Arduinos.
Schematic diagram for revision ash02a (87k pdf)
The second and latest revision, released in March 2014, is ash02b.
Compared to ash02a, the following changes were made:
- Pass-throughs were added for the four additional pins (SCL, SDA, IOREF, and an unused pin) on the Arduino Uno R3 and all newer Arduinos.
- The additional capacitor through-holes, the Arduino power jumper, and the upper block of pin-remapping jumpers (for Arduino pins 8-12) were moved slightly to make room for the new pass-throughs.
- Different reverse-protection MOSFETs are used. (This does not significantly change the electrical or thermal performance of the shield.)
Schematic diagram for revision ash02b (356k pdf) | https://www.pololu.com/docs/0J49/all | CC-MAIN-2015-22 | refinedweb | 1,053 | 57.47 |
I am trying to import a project developed with MyEclipse 6.0 IDE into Intellij 15.0.3.
I am unable to import the following packages
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
Failed to download '':Cannot download '':Connection closed at byte 969. Expected 7713 bytes. response: 200 OK
The comp on which you develop should be connected to internet to allow outgoing connections. If you are using proxy to connect to internet then check the proxy settings in the browser. Below is the button that you can use to configure proxy settings to allow IDEA to connect to the internet.
The file
is downloadable from the browser, just copy/paste it to the navigator bar and start downloading. | https://codedump.io/share/wmeBxfY72XxI/1/importing-existing-struts-1-application-developed-using-myeclipse-to-intellij-idea-1503 | CC-MAIN-2017-51 | refinedweb | 120 | 60.11 |
I am experiencing an issue very similar to the one found here:
whereas as mod_python application will return a blank screen upon the
calling of parseString. Example:
from xml.dom.minidom import parseString
def index(req):
parseString("<info>stuff</info>")
return "blah"
The funny thing is, I am using Python 2.5 with expat 2.0.0, and therefore
the expat bug should not apply, correct? Apache is using expat 1.95.7.
Would that make a difference? Perhaps it is something completely
different. Also, importing expat works just fine:
import pyexpat
def index(req):
return "blah"
and it returns what it should. What else can I try to narrow down the
issue? Or, is there some other documentation that someone can point me to?
I can't seem to find anything else.
Thanks in advance!
Joseph Bernhardt
-------------- next part --------------
An HTML attachment was scrubbed...
URL: | http://modpython.org/pipermail/mod_python/2008-May/025321.html | CC-MAIN-2017-51 | refinedweb | 146 | 70.29 |
Opened 11 years ago
Closed 11 years ago
Last modified 9 years ago
#128 closed defect (worksforme)
IndexError when using invalid {% extends %} in template should be replaced with better error message.
Description
First of all the error
There's been an error: Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 190, in get_response return callback(request, **param_dict) File "/home/espen/django/blog/apps/blog/views/blog.py", line 20, in details return HttpResponse(t.render(c)) File "/usr/lib/python2.4/site-packages/django/core/template.py", line 116, in render return self.nodelist.render(context) File "/usr/lib/python2.4/site-packages/django/core/template.py", line 437, in render bits.append(node.render(context)) File "/usr/lib/python2.4/site-packages/django/core/template_loader.py", line 80, in render parent_is_child = isinstance(compiled_parent.nodelist[0], ExtendsNode) IndexError: list index out of range
The template file that trys to extend base.html:
{% extends "base" %} {% block title %} Espen Grindhaug - Blog - {{ obj.headline }} {% endblock %} {% block content %} <h1>{{ obj.headline }}</h1> <p class="summary">{{ obj.summary }}</p> <p class="body">{{ obj.body }}</p> <p class="by">By {{ obj.author }} ({{ obj.pub_date }})</p> {% endblock %}
The weird thing here is that I get exactly the same error with completly different child template.
base.html (the same as in URL):
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xml: <head> <link rel="stylesheet" href="style.css" /> <title>{% block title %}Espen Grindhaug{% endblock %}</title> </head> <body> <div id="sidebar"> {% block sidebar %} <ul> <li><a href="/">Home</a></li> <li><a href="/blog/">Blog</a></li> </ul> {% endblock %} </div> <div id="content"> {% block content %}{% endblock %} </div> </body>
The details function in blog.py that loads the template:
def details(request, slug): try: obj = entries.get_object(slug__exact=slug) except entries.EntryDoesNotExist: raise Http404 t = template_loader.get_template('blog/details') c = Context(request, { 'obj':obj,}) return HttpResponse(t.render(c))
Change History (9)
comment:1 Changed 11 years ago by adrian
comment:2 Changed 11 years ago by espen@…
- Resolution set to fixed
- Status changed from new to closed
The simple solution was to change {% extends "base" %} to {% extends "blog/base" %}
*a little bit ashamed* :/
comment:3 Changed 11 years ago by jacob
- Resolution fixed deleted
- Status changed from closed to reopened
- Summary changed from IndexError when using {% extends "base" %} in template. to IndexError when using invalid {% extends %} in template should be replaced with better error message.
That error message is extremely unhelpful; I'm reopening this ticket and changing the title to indicate that we should fix the error message.
comment:4 Changed 11 years ago by adrian
- milestone set to Version 1.0
comment:5 Changed 11 years ago by adrian
- Status changed from reopened to new
comment:6 Changed 11 years ago by adrian
- Status changed from new to assigned
comment:7 Changed 11 years ago by adrian
- Resolution set to worksforme
- Status changed from assigned to closed
I set about fixing this error message, but I can't reproduce it. I get a "Template 'base' cannot be extended, because it doesn't exist" message, which is expected behavior. Please reopen if you can reproduce the strange IndexError.
comment:8 Changed 11 years ago by espen@…
I can't reproduce it any more either.
comment:9 Changed 9 years ago by anonymous
- milestone Version 1.0 deleted
Milestone Version 1.0 deleted
Very strange. Is your base template called "base.html", and is it within your template root, rather than being in the "blogs" subdirectory of the template root? | https://code.djangoproject.com/ticket/128 | CC-MAIN-2016-26 | refinedweb | 595 | 50.33 |
Azure Container Instances (ACI) allows you to spin up containers in the cloud without creating or managing any kind of infrastructure. It’s really bringing container as a service to you. That said, ACI is no replacement for Container Orchestration Systems like Kubernetes. It’s more an underlying infrastructure for hosting/running containers itself.
ACI integrates seamlessly with orchestrators like Kubernetes. You can use the ACI connector for Kubernetes to create an ACI node on your Kubernetes cluster. See this repo for more details.
When you deploy Pods to ACI through Kubernetes, you may recognize that all Pods stuck in state Pending when they’re meant to be deployed to the
aci-connector node (aci-connector is the default name for the ACI node). You may ask why all pods stuck at this stage. To get more insights let us examine the cluster a bit
First, let’s display all the Pods in the
default namespace using
kubectl get pods
There should be one pod named
aci-connector-<SOME_ID> That’s the Pod which connects Kubernetes to ACI. Yes, a Pod. The Pod is mimicking the
kubelet interface and registers itself as a unlimited node on your cluster. Giving this background, let’s get some insights from the
aci-connector-<SOME_ID> Pod by executing:
kubectl logs aci-connector-<SOME_ID>
You may see a message like “The subscription is not registered to use namespace ‘Microsoft.ContainerInstance”.
I had the same issue with one of my Azure subscriptions, for others it just worked as expected.
As the error message indicates, namespaces are associated with Azure Subscriptions. Azure CLI allows you to manage namespaces using
az providers. Ensure you’ve selected the proper Azure subscription by invoking
az account set --subscription <SUBSCRIPTION_ID>. Now let’s review the registration state of the particular namespace.
az provider show -n Microsoft.ContainerInstance
It should return a status
NotRegistrered. Because we want to use ACI, we’ve to register the namespace in the current subscription by executing:
az provider register -n Microsoft.ContainerInstance
After a couple of seconds,
az provider show -n Microsoft.ContainerInstance should return with status
Registered.
Move over to your Kubernetes cluster, delete and re-create the Pod. Now it should spin up the Pod as expected. Use
kubectl get po -o wide to verify if your Pod is deployed to the
aci-connector node. | https://thorsten-hans.com/fix-pods-stuck-in-pending-on-azure-container-instances | CC-MAIN-2019-18 | refinedweb | 391 | 57.16 |
This article provides a simple sample of an application that clicks a button in
another application. The technique could be used for other controls too. For the
purpose of a simple sample the program is a console program. The sample will
click the "=" button in the Windows Calculator Acesssory.
Note that I am using Windows 7; if the Windows Calculator Acesssory in Windows 8
is different and/or cannot be used in the manner it is used here then I hope you
can find another program to use.
The first thing to do is to determine what button to click and the circumstances
of when to click it. Execute the Windows Calculator Acesssory. Type "2+2=" into
it. The result should be 4 of course. Then click the "=" button. Each time "="
is clicked, the value will be increased by 2. So that is something very simple
to automate. We can "manually" execute Calculator and type "2+2=" into it. Then
when we execute our sample application it can just click the "=" button and we
can see the result.
The next thing to do is to determine the identity of the button to be clicked so
our program can click it. With the Windows Calculator Acesssory executing, go to
Visual Studio. In the "Tools" menu is "Spy++". If your system is a 64-bit system
then you might need to use the 64-bit version of Spy++. The Spy++ toolbar will look
like:
Click on the "Find Window" icon; it is a binoculars with a window in the
top-left; it looks like:
That will open a "Find Window" dialog. Now first, ensure that Calculator is
visible; at least the "=" button part of it. Then click on the "Finder Tool"
icon (the square with a circle in it that looks like a target). The "Find Window" dialog looks like:
With the mouse button down, drag from the Finder Tool icon to the "=" button and
release the mouse. The Find Window dialog will be filled in with the "=" button's
handle and other data. Click on "OK". You will then get a "Window Properties"
window with a tab control with five tabs. The five tabs are General, Styles,
Windows, Class and Process. The wndow will look like:
Look for "Contol ID" near the botom of the first (General) tab. It will probably
have the value "00000079". It is a hexadecimal value. Whatever the value is, it
will probably be that value every time you execute that program. At the Windows
API level, controls are often identified by a control id. We can use the Contol
ID shown in the Window Properties window in our program.
We are ready to write the application. I assume you can create a console
application. If your system is 64-bit then you will need to change the "Platform
Target" to "x64". The .Net Process class cannot do everything we need it to do
when our application is not the same as the other application in terms of bits
(32-bits and 64-bits). The "Platform Target" property is in the Build tab of the
project's properties. If you need to automate a 32-bit application from a
64-bit application or 64-bit from 32-bit then you can do that but you will need
to use the Windows API directly to do the equivalent of what we are using the
.Net Process class to do.
You need to add a "using" for the "System.Diagnostics" and
"System.Runtime.InteropServices" namespaces. You will also need to add the
following to the "Program" class:
const int WM_COMMAND = 0x0111;
const int BN_CLICKED = 0;
const int ButtonId = 0x79;
const string fn = @"C:\Windows\system32\calc.exe";
[DllImport("user32.dll")]
static extern IntPtr GetDlgItem(IntPtr hWnd, int nIDDlgItem);
[DllImport("user32.dll")]
static extern IntPtr SendMessage(IntPtr hWnd, int Msg, int wParam, IntPtr lParam);
The sample program will first look at the processes to find Calculator. We will
assume the exe is at:
C:\Windows\system32\calc.exe
When we find a process that is executed from there, we know for sure it is the
one we are looking for. Note that there are very many processes in a system;
many more than most people are aware of. Each Windows Service is a process.
Windows Services and other processes that we need to ignore do not have a
window, so we can ignore all processes without a window. The following code will
do all that:
IntPtr handle = IntPtr.Zero;
Process[] localAll = Process.GetProcesses();
foreach (Process p in localAll)
{
if (p.MainWindowHandle != IntPtr.Zero)
{
ProcessModule pm = GetModule(p);
if (pm != null && p.MainModule.FileName == fn)
handle = p.MainWindowHandle;
}
}
if (handle == IntPtr.Zero)
{
Console.WriteLine("Not found");
return;
}
The GetModule function is shown later in this article.
We are ready to click the button from within our application. To click the button we
will send a BN_CLICKED notification message to the button's parent,
the Calculator main window. To do that we will need both the
button id (as in the preceding) and the handle of the button. The handle varies
for each execution but we can determine the handle based on the handle of its
parent and the control id. The Windows API function GetDlgItem does that, as in:
IntPtr hWndButton = GetDlgItem(handle, ButtonId);
Windows messages are sent to a window as determined by the window handle.
Messages also have a message id that specifies what the message is for. We will
send a WM_COMMAND message. The other two parameters we need to send the message
is a wParam and a lParam. The value of those depend on what the message is (the
message id). For a WM_COMMAND message doing what we are doing, the control id is
passed as the high word of wParam and the low word of wParam is the notification
code (BN_CLICKED). The lParam is the handle of the control. It seems unnecessary
to pass both the control id and the control window's handle but that is the way
it works. The following sends the message:
int wParam = (BN_CLICKED << 16) | (ButtonId & 0xffff);
SendMessage(handle, WM_COMMAND, wParam, hWndButton);
When the processes are being searched, since some modules might be restricted
from access for security purposes, we catch errors using the GetModule function.
It is very simple, as in the following:
private static ProcessModule GetModule(Process p)
{
ProcessModule pm = null;
try { pm = p.MainModule; }
catch
{
return null;
}
return pm;
}
To test the program, execute Calculator and type in "2+2=". Then execute the
sample program. Every time it executes, 2 will be added to the Calculator value.
©2014
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/UploadFile/SamTomato/clicking-a-button-in-another-application/ | CC-MAIN-2014-49 | refinedweb | 1,109 | 65.62 |
Procedurally generated meshes have several uses in game development:
- You can use them for crisp, non-standard UI components.
- You can use them to render lines that look correct if your engine (in this case, Unity) cannot.
- You can make intricate mathematical objects that may be difficult to do by hand.
- You can use them to create worlds, creatures, and props.
Making a mesh is not always the right solution, but in Unity, it is surprisingly easy, and in this post I describe the basics.
Related Posts
To make a custom mesh, you need at the very least two things: where the vertices of the mesh are, and how the mesh is triangulated, that is, which vertices make up each triangle of the mesh.
The images above show two meshes of a cube with the same vertices, but different triangulations. I also assigned a unique index to each vertex. This makes it possible to say things like “vertex 3” and “there is a triangle made from vertices 0, 1, and 2”. Triangulations are given in terms of vertex indices, not the vertices themselves. This is not only more compact, but sometimes different vertices appear in the same position, so using indices avoid any ambiguity.
For meshes created programmatically, you usually have an algorithm (possibly a formula) for these two things. Although you can certainly simply list the vertices and the triangulation directly, for meshes like that it is usually easier to simply use a modeling tool (that is what they are for). As we will see, the ability to simply list the two things we need can be a valuable debugging aid.
If you want a textured mesh, you also need to calculate UVs. UVs are normalized 2D coordinates that map into a texture. Each triangle in the mesh has a corresponding triangle in the texture. For example, if vertex 0, 1 and 2 have texture coordinates [0, 0], [1, 0], [0, 1], then the triangle 0, 1, 2 will be textured red if the texture looks like the blue and red square below.
Before looking at detailed calculations, let’s look at the general structure of a mesh generator in Unity. We will use this as the base for all our builders.
var mesh = new Mesh(); var vertices = CalculateVertices(); mesh.SetVertices(vertices); var triangles = CalculateTriangles(); mesh.SetTriangles(triangles, 0); var uvs = CalculateUvs(vertices); mesh.SetUVs(0, uvs); mesh.RecalculateBounds(); mesh.RecalculateNormals(); meshFilter.sharedMesh = mesh;
The full implementation has a lot of detail, and is shown at the end of the post. It is implemented as a component. You will need a MeshFilter component, and to be able to see the mesh, you will also need a MeshRenderer.
The basic operations are very simple though; we calculate what we need, then we put it together in a few lines.
The real work comes in to calculate the vertices and triangles. I do this work away from the computer (usually long before writing any code), as I find it’s easier to focus on the math and not deal with implementation complexities yet. Here are the basic steps:
- Assign an index to each vertex.
- Find an algorithm for calculating vertices. This is often a formula or set of formulas in terms of the vertex index.
- *Calculate how many vertices there are.
- Find an algorithm for calculating the vertex indices for each triangle. This too is often a formula or set of formulas in terms of the vertex index.
- *Calculate how many vertices there are.
- If necessary, calculate normals. Check that you have the same number as vertices. (This is not explained in this post).
- If necessary, calculate UVs. Check that you have the same number as vertices.
You have a lot of freedom in how you organize the data; the machine will not care. (Some schemes may be a bit slower, but if you need to optimize, a working reference implementation will be valuable in programming the more complicated optimized version). Therefore, always choose a system that will make the math easier. Often, the first triangle will have the first vertex, and the last triangle to have the last vertex.
Always choose a scheme with the simplest math.
The two steps marked with an * are checks to help avoid making certain errors (I make them very consistently if I skip these steps).
Finding formulas for shapes is of course a very broad problem, but I assume that you already understand your shape in some mathematical way (the reason why you are creating it dynamically in the first place). I will look at just one simple example and a variation in this post.
Example
Circular Sector
This mesh will be to approximate a circular sector in the XY plane with center at the origin and radius one (the shape between two radiuses and the circumference of a circle) which starts at angle 0, and ends at a given
angle, using
triangleCount triangles.
Assign vertex indices: There are two types of vertices; the center vertex, and perimeter vertices. There are four schemes that seem “natural”, depending on
- whether we make the first or last vertex the center, and
- whether we make the perimeter vertices go clockwise or anticlockwise.
For the first decision, it seems there is a slight advantage if we make the center vertex last (allowing our loop counter to match vertex indices). However, we will see that we often have some offset for a list of things, and the optimization suggested here does not generalize. Therefore, it pays to get used to that format; therefore we will put the center vertex first.
The last choice is easier to make, we follow the same order as Unity angles, so we go anti-clockwise.
Calculate the vertices: If the angle of the sector is
angle, then the inner angle of each triangle is
triangleAngle = angle / triangleCount. The center vertex is at the origin,
vertices[0] = [0, 0, 0], and the other vertices are given by
vertices[i + 1] = [cos(i * triangleAngle), sin(i * triangleAngle), 0], where
i goes from
0 to
triangleCount.
Note that this formula is simple because of specific choices in the problem:
- The circle is at the origin (so no positional offsets are necessary).
- We start at angle 0 (therefore no nasty angle offsets need to be calculated).
- The radius is 1 (therefore there is no scaling factor).
- We are in the XY plane (therefore we don’t have to do any tricky transformations or have to deduce a more difficult formula ourselves).
These choices were made on purpose. If we need a translated, scaled or rotated version, but do that in the Unity editor. There is no need to add error-prone or tricky math if it can be avoided.
Calculate the number of vertices: Indices go from
0 to
triangleCount + 1, so there are
triangleCount + 2 (there are
triangleCount + 1 perimeter vertices, and the center vertex).
Calculate the triangles: Each triangle is made from vertex
0,
i + 1 and
i + 2, where
i ranges from
0 to
triangleCount - 1. (Check: last triangle is made from
0,
triangleCount - 1 + 1, and
triangleCount - 1 + 2 equivalent to
0,
triangleCount,
triangleCount + 1, consistent with the fact that we have
triangleCount + 2 vertices, going from
0 to
n + 1.
Calculate the number of triangles: There are
triangleCount triangles by design, but we check whether our previous calculation is consistent with this: one rectangle for each index going from
0 to
triangleCount - 1, therefore
triangleCount triangles as required.
Calculate normals: In this case, Unity’s automatic calculation will do fine.
Calculate UVs: The UV range is exactly half our vertex range and offset by
[0.5, 0.5], so we get:
center = [0.5, 0.5]
uv[0] = center triangleAngle = alpha / n
uv[i + 1] = [cos(i * triangleAngle), sin(i * triangleAngle ))/2] + centerwhere
iranges from
0to
triangleCount.
You can grab the generic method from the full implementation given at the end of this section.
A full circle
We can use the code above for a full circle, but we may want to get rid of the last vertex that is the same as vertex 1. This type of thing happens frequently, so it is useful to take a look at how it is handled. In this case, we assume
angle = 2*pi, so
triangleCount is the only parameter.
Calculate the vertices: The vertices are exactly the same, except that since the last and vertex 1 are can be the same, we can simply throw out the last one.
Calculate the number of vertices: There are now
trianlgeCount + 1 vertices (one less than
trianlgeCount + 2 that we had before, since we threw out one).
Calculate the triangles: All triangles are the same, except for the last one, which is
0,
triangleCount,
1. Annoyingly this makes the for-loop in our code a bit more complex; more on this in the Implementation Tips section below.
Calculate the number of triangles: As before, there are
triangleCount triangles. We can check this in our loop too in our calculations.
Both UVs and normals stay the same, except the last one of each is dropped.
Notice that this is really a regular polygon with
triangleCount sides that looks like a circle when
triangleCount is sufficiently high.
Implementation Tips
Although the concepts are straightforward, it is easy to make mistakes. The tips in this section is to help avoid them, and what you can do if you run into bugs to solve them as fast as possible.
Use a clean test scene to test. This ensures nothing else interferes with your mesh builder while you are developing it.
Set the transform of your object to the identity state. That is, its position and rotation should be
[0, 0, 0], and the scale should be
[1, 1, 1]. This avoids those values from affecting your results while you are developing. Once you have it working, you do need to test it on objects with their transforms in non-default states. The transform should only make a difference if you used it in the calculations.
Start with no parent. Another thing that can sometimes interfere with the algorithm is parent transforms. During development, test with an object that has no parent. As with the previous paragraph, once you have it working, you do need to test whether you code still works with a parent with a transform not set to the identity. Again, it should not matter unless you use your objects global transform or any parent up the line in the calculations.
Always start with a simple quad mesh. I always start with a mesh that creates a square with four vertices and two triangles. This is because so many mesh generations bugs have nothing to do with the mesh math; instead, it is one of the following:
- You did not set the mesh of the mesh filter.
- The mesh is in the wrong place and off-camera.
- There is no renderer attached, and so the mesh is invisible.
- The mesh is too small or big.
- The material has not texture and an invisible color. (Not so often, because by default non-transparent materials are more common.)
- Because of our viewing angle, all planes are parallel to the viewing direction so we look at infinite thin things that are not visible.
- The triangle winding is wrong, so we are looking at invisible back-faces.
If you run into the last issue, and all of them are consistently wrong, simply swap any two. In other words, if this order does not work: vertex0, vertex1, vertex2, then this will: vertex0, vertex2, vertex1. (I really struggle to remember the correct order, so I always try it one way, making sure it is consistently clockwise or anticlockwise, and if it does not work I just swap two variables.)
Usually, the first thing you see when implementing a mesh is nothing, often for one of the reasons above. Before checking your calculations. It is useful to go down that list and eliminate all those causes, and a simple mesh will make this easier.
If your model can be asymmetric, don’t test it with symmetric parameters. For example, suppose you make make a mesh builder for arbitrary dimensions. Test it with a width and height that are clearly distinct, such as 1 and 5. It is common to swap symmetric variables by accident, and if you use the same value for both, you will not notice this kind error.
Print out the number of vertices and triangles after creating them. Once you put in your calculations, the most common error message is “Some indices are referencing out of bounds vertices.” To help avoid this bug, print out the vertex count and triangle count after you calculated the lists. See whether this matches your calculations in steps 3 and 5 above.
Keep for-loops simple. You may be tempted to write for loops that start at some offset (to match vertex indices for example), or increments by another amount (such as by three when calculating the triangles of your mesh). But it is better to keep your loops vanilla (start at 0, increment by 1, smaller than howEverManyYouNeed), and calculate array offsets accordingly. Here is why:
- Your code will be very consistent, and easier to follow if you have multiple mesh builders.
- Even for a single builder, it is easier to see what goes on if your loop uses a standard counter, and calculate vertex indices or array offsets accordingly.
- It is easy to get confused about what your loop index means, leading to mistakes during implementation.
For example, here is the loop to calculate perimeter vertices in our sector example:
triangleAngle = angle / triangleCount; for(int i = 0; i < triangleCount + 1; i++) { int vertexIndex = i + 1; vertices[vertexIndex] = new Vector3( Mathf.Cos(i * triangleAngle), Mathf.Sin(i * triangleAngle), 0); }
And here is an example of calculating the triangles, also for the sector.
for (int i = 0; i < triangleCount; i++) { triangles[3*i + 0] = 0; triangles[3*i + 1] = i + 2; triangles[3*i + 2] = i + 1; }
Deal with loops in the geometry by a simple test in the for loop. In the example of the circle where we share a vertex between the last and first triangles, our for-loop is a bit more complicated. This is common when the geometry features closed curves.
Here is how I usually write it, doing a special-case check inside the loop.
var triangles = new List<int>(); for (int i = 0; i < triangleCount; i++) { int index0 = 0; int index1 = i + 1; int index2 = i + 2; if(i == triangleCount - 1)//special case { index2 = 1; //second vertex of last triangle is vertex1 } triangles.Add(index0); triangles.Add(index1); triangles.Add(index2); }
Two alternatives come to mind: putting the special case outside the loop; or using modular arithmetic. I find doing the explicit test is the easiest to understand (it is easier to see directly how many triangles there are). And modular arithmetic is error-prone, especially if there are more types of rectangles packed for more complicated figures.
Draw small spheres at vertex positions to see whether your vertices are calculated correctly. If you still cannot get your mesh to work, a useful debugging aid is to draw a small sphere at each vertex position. This can provide you with a clue of what is going on. If the spheres are in the correct spot, then you know the issue is with the triangulation. In many situations, you can reduce the number of triangles so that they can become tractable by hand. In the circle, we can reduce the triangles to 3, for example. This allows us to see where the problem lies. (You will typically not have to calculate much, since if you can find one that is wrong, that may be enough to spot the bug.)
Unity’s gizmos are perfect for this, although there are some subtle issues to keep in mind. You will need a copy of the vertices (to avoid calculating them each frame), but you don’t want to serialize them with the scene (especially for large meshes), which means they will not render if you close and open the scene again without calling
UpdateMesh. This is probably OK as long as you don’t forget and get confused about why they are not rendering. In the code below, I use OnValidate to reset the cached vertices if necessary.
Use debug textures to help to diagnose UV bugs. When debugging UVs, it is useful to use special textures such as square or polar grids with coloring to help you figure out which parts of the image maps to which part of the mesh. I often try to find or make a texture that is specific to what I am trying to do. There are some nice ones here.
For 2D meshes, implement a re-usable standard UV generation function. For 2D meshes, it is common to use a standard UV scheme that maximizes the texture use, possible with some constraints such as preserving aspect ratio or mapping the origin to the center of the texture. An implementation is given inside the
MeshBuilder class below.
Make sure random data is used consistently by your algorithms. When using random data to generate your mesh, make sure that all your methods use the same random data, and that they don’t each generate their own. For example, if vertices are randomly generated, the UVs should use the same data, otherwise, there will be distortions that can fool you into thinking the UV code is wrong.
Use a context menu to allow testing in the editor without having to run the game. If you implement your mesh builder as a component, add a method that is called from the context menu or inspector button that you can use to test the builder in the editor.
Use components for simple builders, and a static method library if you have many builders. If you only have one or two mesh builders in your project, then designing your builders as components is probably the easiest way to go. If you have more, especially if they share some logic, you may want to consider a library of static functions instead.
For curves, calculate the number of triangles based on some accuracy metric to ensure visual consistency. For example, in our sector code above we specify the number of triangles directly; but it would be better to use the metric
trianglesPerRadian, and then calculate the number of triangles we need from the angle. This way, the curve of the sector will approximate a circular arc with the same fidelity regardless of the angle.
int
triangleCount = Mathf.CeilToInt(angle * trianglesPerRadian);
Full Implementation
Mesh Builder
using System; using System.Collections.Generic; using System.Linq; using UnityEngine; [RequireComponent(typeof(MeshFilter))] public class MeshBuilder: MonoBehaviour { private readonly static Color DebugSphereColor = new Color(1, 0.25f, 0); [Header("Debug Options")] [SerializeField] private bool drawDebugSpheres = false; [SerializeField] private bool printDebugInfo = false; [SerializeField] private float debugSphereRadius = 0.1f; //Use property instead to ensure initialization private MeshFilter meshFilter; //We keep this as a variable so we can draw debug Info private List<Vector3> vertices; protected MeshFilter MeshFilter { get { if (meshFilter == null) { meshFilter = GetComponent<MeshFilter>(); //cannot be null since MeshFilter is required. } return meshFilter; } } public void UpdateMesh() { DestroyOldMesh(); Preprocess(); var mesh = new Mesh(); vertices = CalculateVertices(); DebugLog("vertices", vertices.Count); mesh.SetVertices(vertices); var triangles = CalculateTriangles(); DebugLog("triangles", triangles.Count); mesh.SetTriangles(triangles, 0); var uvs = CalculateUvs(vertices); if (uvs == null) { //can be null if subclass does not support texturing DebugLog("uvs", null); } else { DebugLog("uvs", uvs.Count); mesh.SetUVs(0, uvs); } var normals = CalculateNormals(); if (normals == null) { //can be null if subclass does override default normals DebugLog("normals", null); mesh.RecalculateNormals(); } else { DebugLog("normals", normals.Count); mesh.SetNormals(normals); } mesh.RecalculateBounds(); meshFilter.sharedMesh = mesh; } public void OnDrawGizmos() { if (!drawDebugSpheres) return; if (vertices == null) return; //can happen if no mesh was ever created. Gizmos.color = DebugSphereColor; foreach(var vertex in vertices) { var spherePosition = transform.TransformPoint(vertex); float radius = transform.lossyScale.magnitude * debugSphereRadius; Gizmos.DrawWireSphere(spherePosition, radius); } } virtual protected List<Vector3> CalculateVertices() { return new List<Vector3> { new Vector3(-1, -1, 0), new Vector3(1, -1, 0), new Vector3(1, 1, 0), new Vector3(-1, 1, 0), }; } virtual protected List<Vector2> CalculateUvs(List<Vector3> vertices) { return GetStandardUvs(vertices, true, true); } virtual protected void Preprocess() { } virtual protected List<int> CalculateTriangles() { return new List<int> { 0, 3, 1, 1, 3, 2 }; } virtual protected List<Vector3> CalculateNormals() { return null; } [ContextMenu("Update Mesh")] protected void UpdateMeshTest() { UpdateMesh(); } private void DestroyOldMesh() { if (MeshFilter.sharedMesh != null) { if (Application.isPlaying) { Destroy(MeshFilter.sharedMesh); //prevents memory leak } else { DestroyImmediate(MeshFilter.sharedMesh); } } } //Assumes a set of vertices in the XY plane protected List<Vector2> GetStandardUvs( List<Vector3> vertices, bool preserveAspectRatio, bool mapOriginToCenter ) { var boundingBox = GetBoundingBoxXY(vertices, mapOriginToCenter); var map = GetStandardUvMap(boundingBox, preserveAspectRatio, mapOriginToCenter); return vertices.Select(map).ToList(); } private Rect GetBoundingBoxXY(List<Vector3> vertices, bool mapOriginToCenter) { var anchor = vertices[0]; var extent = vertices[0]; foreach (var vertex in vertices.Skip(1)) { if (vertex.x < anchor.x) { anchor.x = vertex.x; } else if (vertex.x > extent.x) { extent.x = vertex.x; } if (vertex.y < anchor.y) { anchor.y = vertex.y; } else if (vertex.y > extent.y) { extent.y = vertex.y; } } if (mapOriginToCenter) { anchor.x = Mathf.Min(anchor.x, -extent.x); anchor.y = Mathf.Min(anchor.y, -extent.y); extent.x = Mathf.Max(extent.x, -anchor.x); extent.y = Mathf.Max(extent.y, -anchor.y); } var size = extent - anchor; return new Rect(anchor, size); } private Func<Vector3, Vector2> GetStandardUvMap( Rect boundingBox, bool preserveAspectRatio, bool mapOriginToCenter) { Vector2 anchor = boundingBox.position; Vector2 size = boundingBox.size; if (preserveAspectRatio) { if(size.x < size.y) { size = new Vector3(size.y, size.y, 0); } else { size = new Vector3(size.x, size.x, 0); } } if (mapOriginToCenter) { return v => new Vector2(v.x / size.x + 0.5f, v.y / size.y + 0.5f); } else { return v => new Vector2((v.x - anchor.x) / size.x, (v.y - anchor.y) / size.y); } } private void DebugLog(string label, object message) { if (!printDebugInfo) return; if (message == null) { DebugLog(label, "null"); } else { Debug.Log(label + ": " + message, this); } } private void GetVerticesFromMesh() { if (MeshFilter.sharedMesh != null) { vertices = MeshFilter.sharedMesh.vertices.ToList(); DebugLog("vertices", "refreshed from mesh"); } } private void OnValidate() { if(vertices == null) { GetVerticesFromMesh(); } } }
Circular Sector Mesh Builder
using UnityEngine; using System.Collections.Generic; public class CircularSectorMeshBuilder : MeshBuilder { [Header("Mesh Options")] [Min(0)] [SerializeField] private int trianglesPerRad = 5; [SerializeField] [Range(0f, 360f)] private float angleInDegrees = 270f; override protected List<Vector3> CalculateVertices() { var triangleCount = GetTriangleCount(); float sectorAngle = Mathf.Deg2Rad * angleInDegrees; var vertices = new List<Vector3>(); vertices.Add(Vector2.zero); for (int i = 0; i < triangleCount + 1; i++) { float theta = i / (float)triangleCount * sectorAngle;++) { triangles.Add(0); triangles.Add(i + 2); triangles.Add(i + 1); } return triangles; } private int GetTriangleCount() { return Mathf.CeilToInt(2 * Mathf.PI * trianglesPerRad); } }
Circle Mesh Builder
using System.Collections.Generic; using UnityEngine; public class CircleMeshBuilder : MeshBuilder { [Header("Mesh Options")] [SerializeField] [Min(0)] private int trianglesPerRad = 5; override protected List<Vector3> CalculateVertices() { var triangleCount = GetTriangleCount(); var vertices = new List<Vector3>(); vertices.Add(Vector2.zero); for (int i = 0; i < triangleCount; i++) { float theta = i / (float)triangleCount * 2 * Mathf.PI;++) { int index0 = 0; int index1 = i + 1; int index2 = i + 2; if(i == triangleCount - 1) { index2 = 1; //second vertex of last triangle is vertex1 } triangles.Add(index0); triangles.Add(index2); triangles.Add(index1); } return triangles; } private int GetTriangleCount() { return Mathf.CeilToInt(2 * Mathf.PI * trianglesPerRad); } } | https://www.code-spot.co.za/2020/11/04/generating-meshes-procedurally-in-unity/ | CC-MAIN-2021-04 | refinedweb | 3,892 | 54.63 |
thailand motorcycle parts, rokcer arm for150cc engine on alib...
US $0.5-1
50 Pieces (Min. Order)
OEM Thailand Motorcycle Parts Of Brake Shoe
US $0.6-1
5000 Sets (Min. Order)
Mesh motorcycle seat cover thailand motorcycle parts
US $1-5
1000 Pieces (Min. Order)
Direct Manufacturer Motorcycle Spare Parts Thailand Tire and ...
US $4-8
1000 Pieces (Min. Order)
High quality brakes motorcycle spare parts thailand/china ...
US $4-6
1000 Sets (Min. Order)
motorcycle spare parts thailand
US $0.1-10
100 Perches (Min. Order)
Original Genuine Thailand Motorcycle Parts
US $1-5
300 Pieces (Min. Order)
motorcycle spare parts thailand/motorcycles alloy spare ...
US $1-10
1 Piece (Min. Order)
Grade A motorcycle spare parts thailand,clutch plate manufact...
US $0.13-0.2
1000 Pieces (Min. Order)
motorcycle spare parts thailand
US $2-3
50 Pieces (Min. Order)
thailand motorcycle parts/ motorcycle parts clutch/ parts for...
US $1-300
10 Pieces (Min. Order)
Thailand Motorcycle Parts
US $1-10
1 Set (Min. Order)
thailand motorcycle parts
US $4-25
500 Sets (Min. Order)
spare parts taiwan motorcycle parts thailand motorcycle ...
US $0.36-1.48
100 Sets (Min. Order)
wholesale cheap tyres 3.00-18 motorcycle spare parts ...
US $4.3-8.5
500 Pieces (Min. Order)
CD70 engine drum assy gear shift, thailand motorcycle parts m...
US $0.5-1
50 Pieces (Min. Order)
SCL-2013030171 CGL125/WY125/GL125 motorcycle spare parts ...
US $2.93-4.09
200 Sets (Min. Order)
motorcycle spare parts thailand
US $10-50
1000 Pieces (Min. Order)
top sale motorcycle spare parts thailand
US $0.01-6
1 Piece (Min. Order)
Durable 100% Raw Material Motorcycle Spare Parts Thailand
US $3-6
300 Kilograms (Min. Order)
alibaba express motorcycle parts manufacture 2.75-18 ...
US $9.2-11.2
500 Pieces (Min. Order)
New Product Motorcycle, Spare Parts, for Cg125, Thailand ...
US $4.2-5
100 Pieces (Min. Order)
thailand motorcycle parts
US $0.32-2.3
200 Sets (Min. Order)
motorcycle tyre 80/80-17 thailand motorcycle parts
US $4-7
500 Pieces (Min. Order)
alibaba express motorcycle parts manufacture 2.75-18 ...
US $7.6-10.3
100 Pieces (Min. Order)
250-18 motorcycle inner tube motorcycle tyre tube price ...
US $0.7-1.0
1000 Pieces (Min. Order)
Motorcycle spare parts thailand made of aluminum with anodize...
US $1-99
1 Piece (Min. Order)
Thailand motorcycle parts
US $0.5-5
2000 Pieces (Min. Order)
tires 90/90-19 thailand motorcycle parts
US $4-23
1500 Pieces (Min. Order)
motorcycle fuel cock ,thailand motorcycle parts
US $0.49-4.99
100 Sets (Min. Order)
chinese motorcycle spare parts thailand good price and high q...
US $1.1-1.3
1000 Sets (Min. Order)
high performance 110cc racing Thailand market motorcycle exhu...
US $13-50
50 Pieces (Min. Order)
Manufacturer motorcycle modified parts, motorcycle spare ...
US $5-10
500 Pieces (Min. Order)
import motorcycle spare parts thailand manufacturers in china
US $5.40
100 Pieces (Min. Order)
made in china motorcycle parts thailand
US $0.01-8
100 Pieces (Min. Order)
250-18 motorcycle inner tube motorcycle tyre tube price ...
US $1.3-1.4
1000 Pieces (Min. Order)
Chinese motorcycle spare parts for motorcycle
US $30-50
400 Sets (Min. Order)
Carbon fiber motorcycle parts for Kawasaki | http://www.alibaba.com/showroom/motorcycle-parts-thailand.html | CC-MAIN-2016-07 | refinedweb | 551 | 64.27 |
kig
#include <curve_imp.h>
Detailed Description
This class represents a curve: something which is composed of points, like a line, a circle, a locus.
Definition at line 27 of file curve_imp.h.
Member Typedef Documentation
Definition at line 39 of file curve_imp.h.
Member Function DocumentationImp that are interpreted as relative displacement (x and y)
Definition at line 44 of file curve_imp.cc.
Definition at line 270 of file curve_imp.cc.
Return whether this Curve contains the given point.
This is implemented as a numerical approximation. Implementations can/should use the value test_threshold in common.h as a threshold value.
Implemented in ArcImp, LineImp, ConicArcImp, RayImp, VectorImp, RationalBezierImp, ConicImp, SegmentImp, LocusImp, CubicImp, and BezierImp.
Returns a copy of this ObjectImp.
The copy is an exact copy. Changes to the copy don't affect the original.
Implemented in LineImp, ConicArcImp, ArcImp, RayImp, ConicImpPolar, ConicImpCart, VectorImp, RationalBezierImp, SegmentImp, LocusImp, CubicImp, BezierImp, and CircleImp.
This function returns the distance between the point with parameter param and point p.
param is allowed to not be between 0 and 1, in which case we consider only the decimal part.
Definition at line 137 of file curve_imp.cc.
Reimplemented in LineImp, ArcImp, ConicArcImp, RayImp, VectorImp, SegmentImp, ConicImp, CubicImp, and CircleImp.
Definition at line 149 of file curve_imp.cc.
This function calculates the parameter of the point that realizes the minimum in [a,b] of the distance between the points of the locus and the point of coordinate p, using the golden ration method.
Definition at line 58 of file curve_imp.cc.
Implemented in ArcImp, LineImp, ConicArcImp, RayImp, RationalBezierImp, VectorImp, SegmentImp, LocusImp, ConicImp, BezierImp, CubicImp, and CircleImp.
Returns the ObjectImpType representing the CurveImp type.
Definition at line 27 of file curve_imp. | https://api.kde.org/4.14-api/kdeedu-apidocs/kig/html/classCurveImp.html | CC-MAIN-2020-05 | refinedweb | 284 | 51.95 |
Empirical Approximation overview¶
For most models we use sampling MCMC algorithms like Metropolis or NUTS. In pymc3 we got used to store traces of MC samples and then do analysis using them. As new VI interface was implememted it needed a lot of approximation types.
One of them was so-called Empirical. This type of approximation stores
particles for SVGD sampler. But there is no difference between
independent SVGD particles and MCMC trace. So the idea was pretty simple
to understand and realize: make Empirical be a bridge between MCMC
sampling output and full-fledged VI utils like
apply_replacements or
sample_node. For the interface description, see
variational_api_quickstart.
Here I will just focus on Emprical and give an overview of specific
things for Empirical approximation
In [1]:
%matplotlib inline import matplotlib.pyplot as plt import theano import numpy as np import pymc3 as pm np.random.seed(42) pm.set_tt_rng(42)
Multimodal density¶
Let’s recall the problem from variational_api_quickstart where we first got a NUTS trace
In [2]:
w = pm.floatX([.2, .8]) mu = pm.floatX([-.3, .5]) sd = pm.floatX([.1, .1]) with pm.Model() as model: x = pm.NormalMixture('x', w=w, mu=mu, sd=sd, dtype=theano.config.floatX) trace = pm.sample(50000)
WARNING (theano.gof.compilelock): Overriding existing lock by dead process '26463' (I am process '27017') Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... 100%|██████████| 50500/50500 [00:25<00:00, 2015.98it/s]
In [3]:
pm.traceplot(trace);
Great. First having a trace we can create
Empirical approx
In [4]:
print(pm.Empirical.__doc__)
Single Group Full Rank Approximation
In [5]:
with model: approx = pm.Empirical(trace)
In [6]:
approx
Out[6]:
<pymc3.variational.approximations.Empirical at 0x1193a41d0>
This type of approximation has it’s own underlying storage for samples
that is
theano.shared itself
In [7]:
approx.histogram
Out[7]:
histogram
In [8]:
approx.histogram.get_value()[:10]
Out[8]:
array([[-0.28495539], [-0.27002112], [-0.27578667], [-0.36169251], [-0.43738686], [-0.479884 ], [-0.3633637 ], [-0.31469654], [-0.36427262], [-0.36479427]])
In [9]:
approx.histogram.get_value().shape
Out[9]:
(50000, 1)
It has exactly the same number of samples that you had in trace before.
In our particular case it is 50k. Another thing to notice is thet if you
have multitrace with more than one chain you’ll get much more
samples stored at once. We flatten all the trace for creating
Empirical.
This histogram is about how we store samples. The structure is
pretty simple:
(n_samples, n_dim) The order of these variables is
stored internally in the class and in most cases will not be needed for
end user
In [10]:
approx.ordering
Out[10]:
<pymc3.blocking.ArrayOrdering at 0x1190be390>
Sampling from posterior is done uniformly with replacements. Call
approx.sample(1000) and you’ll get again the trace but the order is
not determined. There is no way now to reconstruct the underlying trace
again with
approx.sample.
In [11]:
new_trace = approx.sample(50000)
In [12]:
%timeit new_trace = approx.sample(50000)
1.54 s ± 24.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
After sampling function is compiled sampling bacomes really fast
In [13]:
pm.traceplot(new_trace);
You see there is no order any more but reconstructed density is the same.
2d density¶
In [14]:
mu = pm.floatX([0., 0.]) cov = pm.floatX([[1, .5], [.5, 1.]]) with pm.Model() as model: pm.MvNormal('x', mu=mu, cov=cov, shape=2) trace = pm.sample(1000)
Auto-assigning NUTS sampler... Initializing NUTS using jitter+adapt_diag... 100%|██████████| 1500/1500 [00:02<00:00, 677.32it/s]
In [15]:
with model: approx = pm.Empirical(trace)
In [16]:
pm.traceplot(approx.sample(10000))
Out[16]:
array([[<matplotlib.axes._subplots.AxesSubplot object at 0x11839eac8>, <matplotlib.axes._subplots.AxesSubplot object at 0x118631160>]], dtype=object)
In [17]:
import seaborn as sns
In [18]:
sns.kdeplot(approx.sample(1000)['x'])
Out[18]:
<matplotlib.axes._subplots.AxesSubplot at 0x118a29240>
Previously we had a
trace_cov function
In [19]:
with model: print(pm.trace_cov(trace))
[[ 1.03635135 0.56102585] [ 0.56102585 1.03247922]]
Now we can estimate the same covariance using
Empirical
In [20]:
print(approx.cov)
Elemwise{true_div,no_inplace}.0
That’s a tensor itself
In [21]:
print(approx.cov.eval())
[[ 1.035315 0.56046482] [ 0.56046482 1.03144674]]
Estimations are very close and differ due to precision error. We can get the mean in the same way
In [22]:
print(approx.mean.eval())
[ 0.03685694 -0.01467962] | https://docs.pymc.io/notebooks/empirical-approx-overview.html | CC-MAIN-2018-47 | refinedweb | 743 | 53.58 |
07-31-2015
04:08 AM
- edited
07-31-2015
04:12 AM
Hello,
I'm developing a program with a form. This worked perfectly while debugging (calls active SE-Application, does things, closes, no crashes). I've tried building my solution and implement it in SE (with and without any registration, Form application and Class Library, all references for SE are there). I have yet to implement my macro, because SE kept crashing after I added it to the ribbon. Conclusion: I have no idea how to properly implement it into SE.
My question however, after trying a lot of different approaches to build my program, suddenly everything in the namespaces (SolidEdgeAssembly, SolidEdgeFramework) is ambigious. I have added the references to both of them, they're being imported on the top, but it returns an error:
'AssemblyDocument' is ambiguous in the namespace 'SolidEdgeAssembly'
It's like SolidEdgeAssembly and SolidEdgeFramework aren't declared or something..
Who can help me?
Edit 1: After "SolidEdgeAssembly. " my suggestions are Splice and Splices.....
Solved!
Go to Solution.
07-31-2015
05:53 AM
Not sure how, but I got it fixed.
Literally no idea how... | https://community.plm.automation.siemens.com/t5/Solid-Edge-Developer-Forum/Debugging-worked-tried-building-solution-now-all-SE-namespaces/td-p/309324 | CC-MAIN-2017-51 | refinedweb | 191 | 57.77 |
Extend Qlable Widget and add it to the QDesigner Widgetlist
Hallo, I'm trying to subclass QLable and add some functionality to the new class. This works well, but now I want my new Widget to show up in the QT Designer. Im using QT5 and read some Tutorials for creating Plugins, but I didn't understand all off that. is there a simple solution for my problem.
Thank you very much for your help
[quote author="M001" date="1375813969"]Im using QT5 and read some Tutorials for creating Plugins, but I didn’t understand all off that. is there a simple solution for my problem.[/quote]
:)
What have you done so far? What is exactly not working? Did you read "this": tutorial? It should be actually really straight forward...
But there are some pitfalls when creating plugins which lead to fail "silently": when they are loaded.
But this is all guessing, since you were very vague with the description about your problem...
hey :), I also tryed the "Custom Widget Plugin Example" that you mentioned. I copied all the Files from the Website to my computer and open them with QT5.1 and press compile (release). I get the customwidgetplugin.dll and copied it to the plugin directory C:\Qt\Qt5.1.0\5.1.0\mingw48_32\plugins\designer. Then I restart QT, but the Plugin wasn't added, and when I open *Tools>Formulareditor>about QT Designer Plugins * it's also not there. I thougt if i copy a codeexample from qt-project website it should work :), but it dosend. So I think there where no missspelling and theses kinds of errors. There must be another error.
I am using:
Qt Creator 2.7.2
Based on Qt 5.1.0 (32 bit)
Created on Jul 2 2013 um 20:48:03
Revision ea5aa79dca
on a 64bit Windows 7 installation
did you look into the second link in my post?
Qt plugin system has a version check built in.
My guess: Since you built your plugin with Qt 5.1 it can only be loaded by applications also built with Qt 5.1 (or higher).
But if the application is built with Qt 5.1.x and the plugin with Qt 5.0.x it should work AFAIK. (backward compatibility)
I compiled the plugin with my QT5.1 installation and want to uses the plugin in the same QT5.1 installation. I thougt this should be working.
QtDesigner needs to be compiled with the same Qt version number (or higher). If it is, there must be another problem.
okay thanks :), I have checkt the Versions of Creator and designer and they all are based on QT5.1. So when I checkt the Version number ob QTDesigner I opend it from the Startmenu and not via QTCreator and the analogclock plugin is in the Widgetbox and I could use ist in my main window :D. But here is my next question, are the QTDesigner opend from QTCreator and the QTDesigner opend from Startmenue two different Programms, with two different plugin directorys or why I can't find my plugin in one of them but not in the other? Tank you very much for your Help raven-worx :)
There are two differnt folders. I copied it in the folder for the creator C:\Qt\Qt5.1.0\Tools\QtCreator\bin\plugins\designer. And now the plugin is showing up, but there is an error message:
... .dll coulnd't be load. The procedure could not be found
why do I get this error in Creator, when I don't get this error with the same File in QTDesigner?
I think you are hit by the problem described "here":
i downloaded "Qt 5.1.0 for Windows 32-bit (MinGW 4.8, OpenGL, 666 MB)" it is compiled with MinGW 4.8. Thats the version I currently use, it was installed by the setup. I'm using:
Qt Creator 2.7.2
Based on Qt 5.1.0 (32 bit)
Created on Jul 2 2013 um 20:48:03
Revision ea5aa79dca
I saw that there is a new QTCreator Version avaliabel "Qt Creator 2.8.0 for Windows (52 MB) ". But how do i find the information with wich version of MinGW it is compiled?
so, i have installed "qt-windows-opensource-5.0.2-mingw47_32-x86-offline.exe" and copied the Plugin created with “Qt 5.1.0 for Windows 32-bit (MinGW 4.8, OpenGL, 666 MB)” into the plugin folder of QT5.0.2 QT Creator. Now I get anothe error message when I open the "About QT Creator Plugins" Dialog: ... .dll uses an incompatible QT-libary. (5.1.0) [release]. I found a link where this problem is described: "": . I can't use the Plugin created with 5.1 in 5.0 thats okay, but using the pluigin created with 5.1 in QT5.1 should be okay, but it doesn't work. I don't know what's going on here :(
From the link I posted, QtCreator from the SDK is built with MSVC so you need to recompile QtCreator to use your plugin or use the MSVC build (then you would need to install Visual Studio)
another step forward :), Thanks SGaist!
I downloaded "Qt 5.1.0 for Windows 32-bit (VS 2012, 511 MB)" compiled the Plugin with this Version and copied the .dll to the "Qt 5.1.0 for Windows 32-bit (MinGW 4.8, OpenGL, 666 MB)" installation Plugin folder and now I could use this Plugin in my "Qt 5.1.0 for Windows 32-bit (MinGW 4.8, OpenGL, 666 MB)" installation. That means i see ist in QTCreator-Designer and could move it on my mainwindow. But when i try to compile my project I get these errors:
C:\Users\name\build-TestPlugIns-Desktop_Qt_5_1_0_MinGW_32bit-Debug\ui_mainwindow.h:18: Fehler:analogclock.h: No such file or directory
the headerfile is missing. I thougt this information is all built in the .dll file?
Edit: I am using MS Visual Studio 2012
The header is needed to write your code like for any other library
sorry for this stupid question, but what should i write into the headerfile?
It's the header file of your widget from the plugin
[quote author="M001" date="1375882194"]I thougt this information is all built in the .dll file?[/quote]
It is but only the implementation. To use it you need the header file just for the compiler to know which interface to use. The linker then looks for the implementation either in your binary or like in your case in the plugin (dependency).
okay i include the headerfile and get many errors:
...ui_widget.h:56: Fehler:undefined reference to `_imp___ZN11AnalogClockC1EP7QWidget'...moc_analogclock.cpp:70:
...moc_analogclock.cpp:70: Fehler:undefined reference to `_imp___ZN11AnalogClock16staticMetaObjectE'
...moc_analogclock.cpp:65: Fehler:undefined reference to `_imp___ZN11AnalogClock16staticMetaObjectE'
a little bit more information:
the first error is caused by here:
file: ui_widget.h
@
....
class Ui_Widget
{
public:
AnalogClock *analogClock;
void setupUi(QWidget *Widget) { if (Widget->objectName().isEmpty()) Widget->setObjectName(QStringLiteral("Widget")); Widget->resize(400, 300); analogClock = new AnalogClock(Widget); //this line produces the error analogClock->setObjectName(QStringLiteral("analogClock")); analogClock->setGeometry(QRect(60, 60, 199, 151)); retranslateUi(Widget); QMetaObject::connectSlotsByName(Widget); } // setupUi void retranslateUi(QWidget *Widget) { Widget->setWindowTitle(QApplication::translate("Widget", "Widget", 0));
#ifndef QT_NO_TOOLTIP
analogClock->setToolTip(QApplication::translate("Widget", "The current time", 0));
#endif // QT_NO_TOOLTIP
#ifndef QT_NO_WHATSTHIS
analogClock->setWhatsThis(QApplication::translate("Widget", "The analog clock widget displays the current time.", 0));
#endif // QT_NO_WHATSTHIS
} // retranslateUi
};
....
@
second error:
file: moc_analogclock.cpp
@
....
const QMetaObject *AnalogClock::metaObject() const
{
return QObject::d_ptr->metaObject ? QObject::d_ptr->dynamicMetaObject() : &staticMetaObject;
}
....@
third
file: --moc_analogclock.cpp
@...
const QMetaObject AnalogClock::staticMetaObject = {
{ &QWidget::staticMetaObject, qt_meta_stringdata_AnalogClock.data,
qt_meta_data_AnalogClock, qt_static_metacall, 0, 0}
};--
....
@
if I add analogclock.h and analogclock.cpp files the code compiles and I could use my plugin. but i thougt the analogclock.cpp should be inside the .dll file. Why do I have to add the analogclock.cpp file?
did you progress any further on this topic i am stuck at the same spot as you were then please help if you found a solution
- NicuPopescu
how this stuffs work:
custom plugin dll in i.e. ..\qtcreator-2.8.0\bin\plugins\designer\ is loaded by the QtCreator and it needs no header file or other information
the plugin's interface provides an includeFile() by which the designer knows what header has to include in ui header file generated for your project
what follows is just an exemplification:
copy customwidgetplugin.dll and customwidgetplugin.lib to i.e ...\Qt\4.8.5\plugins\designer\
add the header file of your custom widget i.e. analogclock.h in your application dir in order to be found at compilation from ui header file
-in pro file add the export library file .lib,
i.e.
LIBS += $$(QTDIR)\plugins\designer\customwidgetplugin.lib
I have tested and works
Cheers!
but mine is still not working i have added the library but whenever i run after adding the library the application starts and terminates unexpectedly without showing any error
- NicuPopescu
hehe ... you mix debug and release app and plugin dll, or conversely
try to compile both for release or debug
i have only used release files do i need to use debug files as well somewhere | https://forum.qt.io/topic/30376/extend-qlable-widget-and-add-it-to-the-qdesigner-widgetlist | CC-MAIN-2018-05 | refinedweb | 1,542 | 67.55 |
I currently have a code written that recursively takes the letters within two strings and returns the new word with alternating letters. I would like to optimize this code so that if the first or second word was longer it will still return the remaining letters in the longer string.
def alt(s,t):
if len(s) != len(t):
return
elif s == '' and t == '':
return ''
else:
return s[0] + t[0] + alt(s[1:], t[1:])
>>> alt('hello','bye')
'hbeylelo'
Just test for
s and
t are empty and return the other value if one of them is:
def alt(s, t): if not s: return t elif not t: return s else: return s[0] + t[0] + alt(s[1:], t[1:])
Even if both
s and
t are empty, the empty string is returned, which is a perfectly valid end-state.
You could shorten this to:
def alt(s, t): if not (s and t): return s + t return s[0] + t[0] + alt(s[1:], t[1:])
so the end-state is reached whenever either
s or
t is empty (or they both are).
This produces your desired output:
>>> alt('hello', 'bye') 'hbeylelo'
An iterative version would be:
from itertools import chain try: # Python 2 from itertools import izip_longest as zip_longest except ImportError: # Python 3 from itertools import zip_longest def alt_iterative(s, t): return ''.join(chain.from_iterable(zip_longest(s, t, fillvalue='')))
This uses the
itertools.zip_longest() function to do most of the work. | https://codedump.io/share/B7vpGdCAAgWQ/1/return-alternating-letters-in-two-strings-of-different-lengths | CC-MAIN-2017-13 | refinedweb | 246 | 61.19 |
This is part three of a series of articles about packaging python games. Part one, Part two. More discussion is happening on the pygame mailing list.
TLDR; I think we should follow the python conventions and fix the problems they cause.
Why this is good? Because you can start writing your code without first having to decide the name. It's a small thing, but in the context of game competitions it's more important. I'm not sure if it's really worth keeping that idea though.
Having data/ separate is the choice of the sampleproject, and the skellington.
Perhaps we could work on adding a find_data_files() type function for setuptools, which would recursively add data files from data/. We could include it in our 'skellington' package until such a thing is more widely installed by setuptools.
TLDR; I think we should follow the python conventions and fix the problems they cause.
src, gamelib, and mygamepackage
1). One thing that is different in the skellington layout from the sampleproject one is that the naming is a bit more specific for where the code goes in skellington.
Skellington layout:
Skellington layout:
gamelib/data/
Sampleproject layout after doing "skellington create mygamepackage"
mygamepackage/data/
(Where skellington is the name of our tool. It could be pygame create... or whatever)
The benefits of this are that you can go into the repo and do:
import mygamepackage
Any it works. Because mygamepackage is just a normal package.
You can also do:
python mygamepackage/run.py.
My vote would be to follow the python sampleproject way of doing things.
Data folder, and "get_data" "mygamepackage" folder.
Having data/ separate is the choice of the sampleproject, and the skellington.. are a giant pain. One, because they require mentioning every single file, rather than just the whole folder. Two, because they require you updating configuration in both MANIFEST.in and setup.py. Also, there are different files included depending on which python packaging option you use.
See the packaging.python.org documentation on including data files:.
I haven't confirmed if pkg_resources works with the various .exe making tools. I've always just used file paths. Thomas, does it work with Pynsist?
Having game resources inside a .zip (or .egg) makes loading a lot faster on many machines. pkg_resources supports files inside of .egg files. So, I think we should consider this possibility.
Having game resources inside a .zip (or .egg) makes loading a lot faster on many machines. pkg_resources supports files inside of .egg files. So, I think we should consider this possibility..
Perhaps we could work on adding a find_data_files() type function for setuptools, which would recursively add data files from data/. We could include it in our 'skellington' package until such a thing is more widely installed by setuptools.
Despite all the issues of having a separate data/ folder, it is the convention so far. So my vote is to follow that convention and try fixing the issues in setuptools/pkg_resources.
Too many files, too complex for newbies..
We can help people asking the question "where does my game code live?", "where do my image files go?" by putting it right up the top of the readme.
We can also help by using dotfiles ".file" so they are hidden. And also using .gitignore and such. We can also try to keep as many 'packaging' related files in a 'dist' folder. Even better would be to put things in our 'skellington' package, in setuptools, or upstream wherever possible.
It used to be convention to have a 'dist' folder which would contain various distribution and packaging scripts. (It's where distutils puts things too). I'm not sure putting scripts in there is a good idea.
It used to be convention to have a 'dist' folder which would contain various distribution and packaging scripts. (It's where distutils puts things too). I'm not sure putting scripts in there is a good idea.
The other problem with having a million config files is that the question "where do I change the app description?".
My vote would be to add simple instructions to the top of the readme, to work on fixing things upstream as much as possible, and to be very mindful about adding extra config files or scripts, moving much config out of the repo as is possible. | http://renesd.blogspot.com/2017/02/where-code-data-things-are-part-3.html | CC-MAIN-2018-43 | refinedweb | 722 | 77.03 |
Real-time Web Application with Websockets and Vert.x.
What are Websockets?
WebSocket is asynchronous, bidirectional, full-duplex protocol that provides a communication channel over a single TCP connection. With the WebSocket API it provides bidirectional communication between the website and a remote server. Originally, WebSocket was supposed to be a part of the HTML 5 specification, but a later revision of the protocol is described in a separate document RFC 6455.
WebSockets solve many problems which prevented the HTTP protocol from being suitable for use in modern, real-time applications. Workarounds like polling are no longer needed, which simplifies application architecture. WebSockets do not need to open multiple HTTP connections, they provide a reduction of unnecessary network traffic and reduce latency.
Each WebSockets connection begins as an HTTP request. In addition, an updated HTTP header indicates that the client wants to change the connection to WebSocket protocol. The initial HTTP connection is replaced by a WebSocket connection using the same underlying TCP/IP connection. At this point, each side can start sending data.
WebSockets are supported by most web browsers (source):
Some examples of good use cases for WebSockets include:
- chat applications
- multiplayer games
- social feeds
- collaborative editing or coding
- sports updates
Websocket API vs SockJS
Unfortunately, WebSockets are not supported by all web browsers. However, there are libraries that provide a fallback when WebSockets are not available. One such library is SockJS. SockJS starts from trying to use the WebSocket protocol. However, if this is not possible, it uses a variety of browser-specific transport protocols. SockJS is a library designed to work in all modern browsers and in environments that do not support WebSocket protocol, for instance behind restrictive corporate proxy. SockJS provides an API similar to the standard WebSocket API. A simple example of using the SockJS library might look like the one below. First, load SockJS library:
<script src="//cdn.jsdelivr.net/sockjs/0.3.4/sockjs.min.js"></script>
Then we establish the connection to the SockJS server:
var sock = new SockJS(''); sock.onopen = function() { console.log('open'); }; sock.onmessage = function(e) { console.log('message', e.data); }; sock.onclose = function() { console.log('close'); }; sock.send('test'); sock.close();
Vert.x
SockJS client requires the server-side part. For the Java language we can use, among other things, Spring Framework Java client & server, Atmosphere Framework or Vert.x. We are going to use the latter.
Vert.x is a polyglot, non-blocking, event-driven tool-kit for building applications on the JVM. Vert.x is pretty fast, which you can see on TechEmpower Benchmarks. The packages of code that Vert.x executes are called verticles. Verticles can be written in Java, Groovy, Ruby, JavaScript as well as in several programming languages mixed and matched in a single application. Many verticles can be executed concurrently in the same Vert.x instance. A single Vert.x instance runs inside its own JVM instance. Vert.x guarantees that a particular verticle instance is never executed by multiple threads concurrently. Verticles communicate by passing messages using an event bus.
Vert.x applications are mostly written by defining event handlers. Vert.x calls handlers using a thread called an event loop. The event loop delivers events to different handlers in succession as they arrive. None of the Vert.x APIs block threads, so you also need to remember not to block the event loop in handlers. Because nothing blocks, an event loop can potentially deliver a lot of events in a short time. We make guarantees that any specific handler will always be invoked by the same event loop. This means you can write your code as single threaded. Vert.x instance maintains several event loops. The default number of event loops is determined by the number of available cores on the machine.
There are two main types of verticles: standard and worker verticles. Standard verticles are always executed using an event loop thread.
Workers are designed for executing blocking code. Workers are like standard verticles but use threads from a special worker thread pool.
An alternative way to run blocking code is to use
executeBlocking method
directly from an event loop.
Typical application will consist of multiple verticles running on Vert.x instance:
There can be many Vert.x instances running on the same host or on different hosts on the network. Instances can be configured to cluster with each other forming a distributed event bus over which verticles can communicate. We can create a distributed bus encompassing many browsers and servers.
Frontend to fast bidding
Auction web page contains the bidding form and some simple JavaScript which loads current price from the service, opens an event bus connection to the SockJS server and offers bidding. HTML source code of sample web page on which we bid might look like this:
<h3>Auction 1</h3> <div id="error_message"></div> <form> Current price: <span id="current_price"></span> <div> <label for="my_bid_value">Your offer:</label> <input id="my_bid_value" type="text"> <input type="button" onclick="bid();" value="Bid"> </div> <div> Feed: <textarea id="feed" rows="4" cols="50" readonly></textarea> </div> </form>
We use the
vertxbus.js library to create a connection to the event bus.
Vertxbus.js library is a part of the Vert.x distribution.
Vertxbus.js internally uses SockJS library
to send the data to the SockJS server. In the code snippet below we create an instance of the event bus.
The parameter to the constructor is the URI where to connect to the event bus.
Then we register the handler listening on address
auction.<auction_id>. Each client has a possibility of registering
at multiple addresses e.g. when bidding in the auction 1234, they register on the address
auction.1234 etc.
When data arrives in the handler, we change the current price and the bidding feed on the auction’s web page.
function registerHandlerForUpdateCurrentPriceAndFeed() { var eventBus = new vertx.EventBus(''); eventBus.onopen = function () { eventBus.registerHandler('auction.' + auction_id, function (message) { document.getElementById('current_price').innerHTML = JSON.parse(message).price; document.getElementById('feed').value += 'New offer: ' + JSON.parse(message).price + '\n'; }); } };
Any user attempt to bid generates a PATCH Ajax request
to the service with information about the new offer made at auction (see
bid() function).
On the server side we publish this information on the event bus to all clients registered to an address.
If you receive an HTTP response status code other than
200 (OK), an error message is displayed on the web page.
function bid() { var newPrice = document.getElementById('my_bid_value').value; var xmlhttp = (window.XMLHttpRequest) ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP"); xmlhttp.onreadystatechange = function () { if (xmlhttp.readyState == 4) { if (xmlhttp.status != 200) { document.getElementById('error_message').innerHTML = 'Sorry, something went wrong.'; } } }; xmlhttp.open("PATCH", "" + auction_id); xmlhttp.setRequestHeader("Content-Type", "application/json"); xmlhttp.send(JSON.stringify({price: newPrice})); };
Auction Service
Now we are going to create a light-weight RESTful auction service. We will send and retrieve data in JSON format.
Let’s start by creating a verticle, the basic package of code that Vert.x executes.
First we need to inherit from
AbstractVerticle
and override the
start method.
Each verticle instance has a member variable called
vertx. This provides access to the Vert.x core API.
The core API is used to do most things in Vert.x, including HTTP, file system access, event bus etc.
For example, to create an HTTP server you call the
createHttpServer method on
vertx instance.
To tell the server to listen on port 8080 for incoming requests you use the
listen method.
We need a router with routes. A router takes an HTTP request and finds the first matching route.
The route can have a handler associated with it, which receives the request
(e.g. route that matches path
/eventbus/* is associated with
eventBusHandler).
We can do something with the request, and then, end it or pass it to the next matching handler.
If you have a lot of handlers it makes sense to split them up into multiple routers.
You can do this by mounting a router at a mount point in another router
(see
auctionApiRouter that corresponds to
/api mount point in code snippet below).
Here’s an example verticle:
public class AuctionServiceVerticle extends AbstractVerticle { @Override public void start() { Router router = Router.router(vertx); router.route("/eventbus/*").handler(eventBusHandler()); router.mountSubRouter("/api", auctionApiRouter()); router.route().failureHandler(errorHandler()); router.route().handler(staticHandler()); vertx.createHttpServer().requestHandler(router::accept).listen(8080); } //… }
Now we’ll look at things in more detail. We’ll discuss Vert.x features used in verticle: error handler, SockJS handler, body handler, shared data, static handler and routing based on method, path etc.
Error handler
As well as setting handlers to handle requests you can also set a handler for failures in routing.
Failure in routing occurs if a handler throws an exception, or if a handler calls
fail method.
To render error pages we use error handler provides by Vert.x:
private ErrorHandler errorHandler() { return ErrorHandler.create(); }
SockJS handler
Vert.x provides SockJS handler with the event bus bridge which extends the server-side Vert.x event bus into client side JavaScript.
Configuring the bridge to tell it which messages should pass through is easy.
You can specify which matches you want to allow for inbound and outbound traffic
using the
BridgeOptions.
If a message is outbound, before sending it from the server to the client side JavaScript,
Vert.x will look through any outbound permitted matches. In code snippet below we allow any messages
from addresses starting with “auction.” and ending with digits (e.g.
auction.1,
auction.100 etc).
If you want to be notified when an event occurs on the bridge you can provide a handler when calling the bridge.
For example,
SOCKET_CREATED event will occur when a new SockJS socket is created.
The event is an instance of
Future.
When you are finished handling the event you can complete the future with “true” to enable further processing.
To start the bridge simply call
bridge method on the SockJS handler:
private SockJSHandler eventBusHandler() { BridgeOptions options = new BridgeOptions() .addOutboundPermitted(new PermittedOptions().setAddressRegex("auction\\.[0-9]+")); return SockJSHandler.create(vertx).bridge(options, event -> { if (event.type() == BridgeEvent.Type.SOCKET_CREATED) { logger.info("A socket was created"); } event.complete(true); }); }
Body handler
The BodyHandler allows you to retrieve the request body, limit the body size and to handle the file upload.
Body handler should be on a matching route for any requests that require this functionality.
We need BodyHandler during the bidding process (PATCH method request
/auctions/<auction_id> contains request body
with information about a new offer made at auction). Creating a new body handler is simple:
BodyHandler.create();
If request body is in JSON format, you can get it with
getBodyAsJson method.
Shared data
Shared data contains functionality that allows you to safely share the data between different applications in the same Vert.x instance or across a cluster of Vert.x instances. Shared data includes local shared maps, distributed, cluster-wide maps, asynchronous cluster-wide locks and asynchronous cluster-wide counters.
To simplify the application we use the local shared map offer by Vert.x to save information about auctions. The local shared map allows you to share data between different verticles in the same Vert.x instance. To prevent issues due to mutable data, Vert.x only allows simple immutable types such as number, string, Boolean or Buffer to be used in local shared map. Here’s an example of using a shared local map in an auction service:
public class AuctionRepository { //… public Optional<Auction> getById(String auctionId) { LocalMap<String, String> auctionSharedData = this.sharedData.getLocalMap(auctionId); return Optional.of(auctionSharedData) .filter(m -> !m.isEmpty()) .map(this::convertToAuction); } public void save(Auction auction) { LocalMap<String, String> auctionSharedData = this.sharedData.getLocalMap(auction.getId()); auctionSharedData.put("id", auction.getId()); auctionSharedData.put("price", auction.getPrice()); } //… }
If you want to store auction data in a database, Vert.x provides a few different asynchronous clients for accessing various data storages (MongoDB, Redis or JDBC client).
Auction API
Vert.x lets you route HTTP requests to different handlers based on pattern matching on the request path.
It also enables you to extract values from the path and use them as parameters in the request.
Corresponding methods exist for each HTTP method. You can provide as many matchers as you like
and they are evaluated in the order you added them. The first matching one will receive the request.
If no routes match the request, a
404 status will be returned.
This functionality is particularly useful when developing REST-style web applications.
If you would like to read something more about designing RESTful APIs see the article
Designing RESTful API.
To extract parameters from the path, you can use the colon character to denote the name of a parameter. Regular expressions can also be used to extract more complex matches. Any parameters extracted by pattern matching are added to the map of request parameters.
Consumes
describes which MIME types the handler can consume.
By using
produces
you define which MIME types the route produces.
In the code below the routes will match any request with
content-type header
and
accept header that matches
application/json.
Let’s look at an example of a subrouter mounted on the main router which was created in
start method in verticle:
private Router auctionApiRouter() { AuctionRepository repository = new AuctionRepository(vertx.sharedData()); AuctionValidator validator = new AuctionValidator(repository); AuctionHandler handler = new AuctionHandler(repository, validator); Router router = Router.router(vertx); router.route().handler(BodyHandler.create()); router.route().consumes("application/json"); router.route().produces("application/json"); router.get("/auctions/:id").handler(handler::handleGetAuction); router.patch("/auctions/:id").handler(handler::handleChangeAuctionPrice); return router; }
The GET request returns auction data, while the PATCH method request allows you to bid up in the auction.
Let’s focus on the more interesting method, namely
handleChangeAuctionPrice.
In the simplest terms, the method might look like this:
public void handleChangeAuctionPrice(RoutingContext context) { String auctionId = context.request().getParam("id"); Auction auction = new Auction( auctionId, new BigDecimal(context.getBodyAsJson().getString("price")) ); this.repository.save(auction); context.vertx().eventBus().publish("auction." + auctionId, context.getBodyAsString()); context.response() .setStatusCode(200) .end(); }
PATCH request to
/auctions/1 would result in variable
auctionId getting the value 1.
We save a new offer in the auction and then publish this information on the event bus to all clients
registered on the address on the client side JavaScript.
After you have finished with the HTTP response you must call the
end function on it.
If you don’t end the response in handler, you should call
so that another matching route can handle the request.
Static handler
Vert.x provides the handler for serving static web resources.
The default directory from which static files are served is
webroot, but this can be configured.
By default the static handler will set cache headers to enable browsers to cache files.
Setting cache headers can be disabled with
setCachingEnabled method.
To serve the auction HTML page, JS files (and other static files) from auction service, you can create a static handler like this:
private StaticHandler staticHandler() { return StaticHandler.create() .setCachingEnabled(false); }
Let’s run!
Full application code is available on github.
Clone the repository and run
./gradlew run.
Open one or more browsers and point them to. Now you can bid in auction:
Summary
The expectations of users for interactivity with web applications have changed over the past few years. Users during bidding in auction no longer want to press the refresh button to check if the price has changed or the auction is over. Instead, they expect to see the updates in application in real-time.
This article presents the outline of a simple application that allows real-time bidding. Due to the fact that WebSockets are not supported by all browsers we used the SockJS library. We created a lightweight, high-performance and scalable microservice written in Java and based on Vert.x. We discussed what Vert.x offers: an actor-like concurrency model, a distributed event bus and an elegant API that allows you to create applications in no time.
The version 3.0 of Vert.x brought a lot of interesting features. I hope this article encouraged you to familiarize yourself with the capabilities offered by this tool-kit. | https://allegro.tech/2015/11/real-time-web-application-with-websockets-and-vert-x.html | CC-MAIN-2018-47 | refinedweb | 2,699 | 50.23 |
Making
Reminded me about one Symbian project...
I suppose we won't have to wait long for something similar using Rpi (or other really low cost boards / kits)....
There's no standard hardware to target, for a start. Even if you can identify certain platforms that might qualify as "standard" they're either locked down, or don't have specifications or open drivers for various bits of their hardware. Where they do, the hardware is usually very specialised with no room to try different things: for example most mobile video hardware expects the OS to interface to it as some form of OpenGL ES. On top of that, a large part of a phones function is radio and managing the radio interface is hard, and boring.
The effort to reward ratio simply isn't there.
Edited 2013-04-15 23:09 UTC
Poke your nose out of your house, see the daylight, go watch a good movie, enjoy life. Your OS your delighted worked on for years is just par with your C64 memories.
Not everyone wants to feel crippled with hard to use OS command lines just to fucking browse the web or edit digital photo taken 2 minutes ago. Your OS is unusable in the vast majority of cases, Android is usable.
Make up your mind. Serious business is serious, homebrew developments stays in the niche.
Kochise?
Several factors combine to cause the current situation.
As Thom explains it, smartphones are more personal, and some of us own quite unbrickable units. Smartphones and tablets are also productivity tools, so using alternative operating systems are often not a practical option.
Older programmers are probably hesitant to invest so much effort in something so high-risk as a mobile OS, when they have to think about their retirement and their "financial safety". You have to get deals with the device manufacturers, the carriers and the developpers themselves. Also, most mobile developers I've met are quite pragmatic: bring food to the table is their objective when they make paid applications; that means picking established platforms to work on.
The last part, THE reason, in my opinion, why young developpers aren't picking up is quite simple: technical skills, or the lack thereof. Being a student in computer science (first year, but I've had interest in computing since my dad had a Microspot 25SX/80 laptop that ran DOS and, my favorite, QBASIC), I can tell that even many computer science students have no prior experience in programming, mostly because they simply did not need to. In the early times of personal computing, some people had to enter the BASIC interpreter of their computer from a small monitor program that allowed to enter code in hexadecimal, or, if lucky, in assembler. Most of my peers had never installed an operating system on their own before our class on system administration two weeks ago. There is a knowledge gap between the "kids of the early personal computing era" and the "kids of today", at the same age. My programming teacher wrote IRQ routines for his own games when he was 16. Nowadays, ask the average gamer what an IRQ is, and while some will know, they will be at least at 1.5 standard deviation from the average.
Today, most people press the power button and wait for a minute or so. Most of my peers had their first contact with computers that ran Windows 98. They used Office for school stuff, and played games that they bought. I consider myself lucky to have had access to a DOS laptop for several years, as this pushed me towards programming (one of my first programming successes was implementing load/save game functionality in SNAKE.BAS).
Today, the booting process of a PC (which, while having quirks, is quite standard, and low-risk, see below) is complicated enough that we spent three classes studying it, and most of my peers still don't understand it. I know there are bright students out there (working on Haiku, among others; impressive job there!), but they are far from being common. Also, while writing a simple "OS" (terminal+keyboard, no user space) is at the hand of a persevering student, designing one that can be maintained over the long run takes experience (and quite frankly, talent), which few students have.
Also, one has to keep in mind the complexities of working with a closed platform. While Android has helped greatly in that regard, with a surprising number of phones and tablets having open source code, it remains problematic to develop for mobile hardware because of the lack of documentation. Also, while each phone model has somewhat close hardware (minor hardware revisions sometimes happen to correct bugs), there is a much greater platform diversity than in the PC world, because of the lack of standardization. There are several SoCs, and these chips are extremely configurable. There is no standard setup for GPIOs across multiple devices, the pin routings are done on a board per board basis (so that can change from one hardware revision to the other, or from a model in a series to another), power management is sometimes done by a separate chip that can vary even on the same model, etc. Moreover, as someone else pointed out, dealing with radios (especially Wifi and/or Bluetooth, cell modems are normally handled by a separate operating system on another CPU, except for Symbian devices) can get very complicated.
There is also the issue of risk: PCs are relatively low-risk, and (barring UEFI bugs), it is quite unlikely to brick one's computer by testing one's own hobby perating system; at worst you will wipe your hard drive, or the thing won't work. Smartphones and tablets come with complicated pin routings that are not always reconfigured upon bootup (yes, I'm looking at you, dear Lenovo A107), leading to a bricked unit (hopefully recoverable, but not always). Or, if one configures power management incorrectly, it is possible to destroy the power charging circuits. There are many perils in mobile devices, so even young people with some experience and/or interest are wary and very prudent.
And finally, even when one has the technical skills, one has to have interest. And few people are interested in complicated stuff. That's another major factor.
I think this combination of factors is the cause for the lack of alternative mobile operating systems.
Also, I think the lack of technical skills in modern computing-interested youth is one of the major causes of the decline of the alternative OS scene: no one is there to pick the project up after the original creator leaves because of real-life concerns. There is also the fact that for every successful project, a few hundred never gather steam, and for everyone of these, a few hundred don't make it to the public (I know mine did not, however, I proved myself I was able to write a simple OS, and started having design ideas that are ... interesting, to say the least).
ssys,
I sort of agree, however when we talk about people today having fewer skills, it's important to realize that we're talking in terms of percentages of programmers rather than absolute numbers. Today there are many more programmers able to do the low level grunt-work that went into OS dev back then, I'd honestly include myself. But we're a much smaller percentage of the overall software engineering market because the pace of growth on high level development has far exceeded low level development.
If we ask why more indy OSes haven't showed up on mobile, my own opinion is that Vanders is on the money. Not only do locked, proprietary, and non-standard devices impede efforts to actually build an indy OS, but they also *severely* impede our ability to distribute one as well. If I write a mobile OS, most of my friends will not be able to run it on their hardware without my writing new drivers for them. It's not like it used to be with traditional computers (pc, commodore before that, etc) where anyone could share their creations freely via the sneaker-net (floppy disks). It doesn't help at all that manufacturers are deliberately handicapping third party software.
Edited 2013-04-16 00:24 UTC.
.
I agree, the argument about younger developers being less skilled is rubbish. And I'm an old one. :-)
The developers marketplace has simply changed. It needs less low-level than before because the low-level stuffs was immature in the past, where skilled people where needed. It's not the case anymore.
The application frameworks was immature (if even existant) too, they're not anymore, quite the contrary.
They're very large framework, well-documented, comes with many samples and so. No such thing in the past.
Younger developers are not less good than us old ones, their skills are just focused where there is demand, aka application & web service.
The low-level skilleds developers knows that's true, simply because there is no more that need for their skilled, beside hobby OS... Even they have moved to such skills set, should be telling enough.
Add to that ever increasing technology change rate and economy world race, and it's no wonder the focus is now on writing and selling as fast as possible applications, services, such stuffs.
Money is no more in low-level software.
Money is still in selling hardware and high-level small softwares and web services. Hence the app stores success and cloud services.
If the new generation spends more time doing social stuff how can they be just as good or even better?
I'm not trying to say the current generation aren't any good, but programming in the past involved writing ALL your own code, no libraries or API's, without Google and using Assembler or other less modern languages.
Today a lot of stuff is pre-made and by Googling you can easily link those pre-build blocks to make something that works.
The positive effect is that it's relative easy to code something, but the negative is that when something doesn't work the finger point starts. A bug in a library, wrong version of .NET, not the right DLL version, etc...
A lof of code of a program, maybe even most code, a computer actually runs isn't written by the coder. (I suspect, I'm not a coder)
Today I changed our backup system so it logs results in to a database, which you can access via a web page. It involves Python, PHP and MySQL. I'm not an expert in any, worse I might not even qualify as a novice. But I did it, it works, thanks to Bing, copy/paste and some editing.
Today a lot of stuff is pre-made and by Googling you can easily link those pre-build blocks to make something that works.
The older generation of computer programmers were shit compared to the even older generation who had to build their own transistors and circuits. They had no assembler, or even a keyboard and monitor.
I never said anyone was shit. My point is: doesn't all the pre-build stuff limit the need for logical thinking and problem solving.
As it's easy to create something that works one can assume a lot of rather dodgy coders have a coding job, because they have one eye in the land of the blind..
No, I have to disagree. For example jQuery lets me not have to worry about how I implement the event handler cross browser (until IE9 it used a different method). I can simply concentrate on dealing with the event.
I think now that we have a better understanding of some of the patterns in which we write code now and software development is better understood.
Trust me those types stand out like sore thumb. We had one that thought development was copy and pasting code off the internet ... he is no longer with us.
While you can do a lot without having to have a full conceptual understanding, in a decent organisation where people actually want to improve whatever they are developing ... these people tend to be de-hired.
Edited 2013-04-16 14:07 UTC
Recently there was this guy who coded at a company, but actually outsourced his tasks to Chinese coders. He made quite a lot of money before they figured him out. He even attained guru status amongst his co-workers.
Anyway, those bab coders may stick out, but I´m pretty sure they find work somewhere.
We use an ERP product which works rather well on small databases, but gives performance problems on larger onces. After months of finger pointing and denial it turned out they did a crappy job on the database routines. To say they are not efficient is a serious understatement.
Maybe they aren´t bad coders, but they are surely at least mediocre.
There are certain environments where the less/in-capable are able to survive. I suggest reading a few random articles on thedailywtf.com, there is quite a lot of wisdom concerning the industry there.
TBH You have to ask yourself: Does it matter if the code isn't good as long as it fulfils the requirements?
We can talk about maintainability, but realistically I have written 5 minute hacks that have stayed in production for longer than some of the really tidy code I have written.
There is no right answer, but it is certainly something to take stock of.
Edited 2013-04-16 18:41 UTC
I don't mean to split hairs, but even in those days people who built their own circuits were generally known as electronics engineers and not programmers or software engineers. Any kind of software that they wrote usually was to provide minimal support needed to get their hardware work in a larger system. These kinds of software are usually referred to as firmware for this very reason.
If you think that firmware writers are good or capable programmers, then you should spend ample amount of time crufty BIOS code hell to see why a software engineering is a sorely needed discipline at that time.
Yes, but those electronics engineers probably had the same attitude to programmers as some of these older programmers have to today's programmers. And those older programmers probably don't think today's programmers should be called the same thing.
Frameworks and Scripts give you the tools to ensure that the well known understood I needn't spend a lot of time on and my brain power can be used for other things.
While a lot of bad coders can get away with Copy & Pasta coding, this has been about since the industry first came about.
I think there is an image of greybeards as if they are zen like martial artists from rural China.
Edited 2013-04-16 11:33 UTC
Sorry, I should have made my main point clearer.
I am not saying that, overall, in their entire career, young developers are worse than the older ones.
I agree with many others that the priority shift is a major cause for the death of the alternative desktop and mobile OS scene.
I also agree that many of the developers of today are indeed more socially balanced (As one can guess, I'm not very much) and career focused (not a bad thing in itself, I too want food).
What I mean is that if you take a developer that grew in the C64 era at 16 years old, and a young developer from today at the same age, the one from the C64 era is probably going to have better technical skills from the need to have them to do whatever he wants on his C64. Nowadays, young people often acquire the technical skills to do such developement close to the time their priorities need to shift and they need to start handling life responsabilities, which means that a smaller percentage of the young developer base can actually work on an OS without much real life consequences.
Since modern OSes are considerably more complex than earlier ones (because users want more features), the amount of work needed on one has gone up faster than the number of developers that can (technical skills and no real life hurdles resulting from it), or could (are ready to learn technical skills and won't have real life consequences).
Making a kernel is exactly the same amount of work it always was. Some things get easier with better hardware.
I made a 64-bit compiler. It was 20,000 lines of code. I had 16 registers that were 64-bit. It was way easier than it would have been with 16-bit segmented DOS.
Hardware now does DMA buffers. You don't have to work hard on efficiency. Sloppy code is okay.
A kernel is easier today. The problem is so much divergent hardware. I guess the hardware is more compilicated -- USB and PCI stuff. If you could just do one device, it wouldn't be bad. USB requires UHCI, EHCI OHCI USB 2.0 USB 3.0 ICH9 ICH10 ICH11 and bunches of other variations of mice and keyboards. It's way more compilccated than it needs to be... but that probably equals what you save on not being pressed for time.
DO NOT BE BRAINWASHED/JEDI MINDTRICKED! A kernel can be done by one person -- 20,000 lines. A compiler can be done by one person -- 20,000 lines. What cannot be done are drivers. What do you think 50 million lines consist of? Drivers.
Edited 2013-04-16 15:33 UTC
Sorry I just have to disagree. There is nothing inherit about breaking down a problem into a set of steps which can be changed into commands.
Being able to break problems down into algorithms isn't inherent on working on ancient platforms.
In fact in some ways it is stifling. There is a chap on developers life that talks about how he is finding it difficult to think like newer developers.
I have the same problem with web development, I am unwilling to do things a certain way because I am worried it won't work on IE6 or IE7 ... it not concious but it sits in the back of my mind.
"Why, when I was a young programmer we had to write the code in the snow with our pee, and a compiler was just a word for the pilot of the hovering dirigible that read the instructions and passed them to the ALU, which was another fellow with an abacus. They would wrap the results around a rock, and drop it on my house when the program would exit. We had to walk uphill..."
."
I think it is very different. OS code from the 90s still works today on x86 because the hardware manufacturers sought to make IBM clones. There were no shortages of published unofficial hardware texts and resources such as ralph brown's famous bios interrupt list for 3rd party developers to do everything microsoft could do and then some. If any manufacturer deviated too far from the defacto-standards, it wouldn't just break indy software, it would break DOS/windows too, as well as thousands of commercial titles. So hardware manufacturers had a strong incentive in making their hardware inter-compatible, which indy devs in the past benefited from greatly.
Today, since the mobile hardware & OS is bundled and sold together, manufacturers don't worry about playing nice with others. They've abandoned hardware standards. Many are not even standard across their own product lines. Hardware that is difficult to reuse could even be deliberate as with planned obsolescence: it encourages customers to continually buy new hardware instead of finding good ways to reuse the old hardware. The situation today may help hardware manufacturers sell more units, but it's absolutely detrimental to indy software developers who have to allocate inordinate amounts of work in model specific hacking (on devices we don't even own to test on ourselves). Most of us don't have the economic means to distribute our own hardware, and even if we did, most of our friends would prefer to run our software on their existing devices rather than have to buy new hardware just to try our OS. So indy OS development is inevitably much less viable than it used to be.
Sadly, I think you are right. Although my own alternative OS never got off the ground for x86, I don't have a strong interest to build an OS for mobile. It was easier to target the golden era x86 devices.. They have well understood and common HW implementations. I almost feel like the Linux kernel is at fault, if they had moved to GPLv3 maybe the HW wouldn't be so opaque. However, since it is open enough for vendors to build upon, yet closed enough for vendors to ship, it leaves alternative "OS" design in the realm of custom ROMs.
However, I *am* a Linux kernel engineer by trade, so if I was hired to do an OS, I *would* just use the kernel I know and love. There isn't a strong need to rebuild a kernel anymore. My ideas all played to niche markets, and if I had to market to general users, why would we ever use anything but the Linux kernel (or BSD kernels, for those that know it better)?
I personally think alternative OSes are pandering to remembrance of things past. There is a lot of fun work there, but we won't move to mobile within five years with that mind set.
kjmph,
I agree that there's very little incentive for vendors to take risks with indy operating systems, and linux is obviously good enough for their needs.
It does disturb me a bit that linux has become such a disproportionate monopoly in the open source space though. This is partly because I like to see alternatives have a healthier share of the open source market, and partly because some of the technical decisions were unfortunate. Linux was designed to be a POSIX compliant OS, which it's done an excellent job at, but I'm not such a fan of POSIX itself. I'd have preferred an OS with better interfaces to have moved ahead of the pack, something like plan 9. However in hindsight it's pretty clear that linux gained converts from the sizable unix market owing to POSIX compatibility, so it's not clear to me that anything with alternative interfaces would have had a real chance.
Edited 2013-04-16 05:30 UTC
People often don't seem to appreciate just how important compatibility is. But it should be obvious that a new OS will struggle to catch on if it can't actually do anything useful, and having access to existing applications makes a big difference.
People often don't seem to appreciate just how important compatibility is. But it should be obvious that a new OS will struggle to catch on if it can't actually do anything useful, and having access to existing applications makes a big difference. "
My logic was legacy is Microsoft and Linux's strength but weakness. Their code is really really awful over the years and the only way to beat them is to mercilessly break with the past. No 32-bit cruft.
You have no idea how hellish their code is. They are in an awful hell.
If you get greedy, you go to hell. I don't care about other country support. I like desktops, not laptops. If you pick a tiny niche, you get heaven. If you try to capture the world, you get cursed.
long long i;
printf("%ld",1234L);
puts(T("I don't even\r\n"));
cout << 123 << 4 << "This is retarded" << endl;
----
In my language. A string by itself is printF if followed by args.
for (i=0;i<100000;i++)
"%d\n",i;
Edited 2013-04-17 01:05 UTC
Linux is not perfect, and mono-cultures are bad.
Eh? Who is "we"? Most hobbyist Operating Systems are just that: a hobby. You're assigning commercial motives to something that isn't commercial.
Linux is a kernel, so it's no wonder people would choose an already made and reasonably well tested kernel. Another kernel no one talks about:
For ITRON there is the well known eCos from Redhat:
The reason we don't see more operating systems is... people look at Linux code and say, "Damn! Complicated!"
My generation had C64s. This generation has Linux. This generation knows nothing but html. My generation knew BASIC and 6502 asm.
My OS is like a C64.
My generation thought an operating system could be made by one person. My generation wrote compilers by themselves.
Just remember -- Linux was written by one person when it existed in 1992.
Edited 2013-04-16 04:09 UTC..
I wrote a 140,000 line alternative OS over the last ten years, including a compiler. In 1990, I was hired to work on Ticketmaster's VAX operating system, so I like to call myself "professional" and not a hobbiest. I work full time.
Maybe, you want me to port it to a mobile platform? Honestly, I detest small computers. I programmed these little 8-bit $1.50 microcontrollers that went on printer toner cartridges so they could be refilled. I don't like wimpy computers. It's interesting for a little while, but I prefer 64-bit 3.3Ghz 8-core computers! A phone computer is nasty with little screen and keyboards. Awful.
Too bad most portable systems (tablets, phones) don't let you do that. If they did, they might be serious equipment that could actually be used to do and store and access real work. Ubuntu for portable computers is supposedly working on that... it's one of the more interesting features. The problem will be, most likely, the fact that you will most likely be stuck with whatever operating system it came with, and whatever specs it came with.
The biggest reason I feel is Mobile environment is highly restrictive compared to desktop counterparts. Long back I was able to write a program (OS??) from scratch which was bootable and can take (very very) few keyboard commands. Though cannot be called as OS, the point is, it was easy to develop such programs in a desktop with a floppy disk. It is very difficult to experiment something like that with current devices due to non-standard based hardware/firmware etc.
I guess it's about expectations.
Any operating system is expected to play MP3s, run Firefox, access USB flash drives, read ext?/FAT/NTFS, support all kinds of network/sound/video cards, provide 3D desktop effects, etc...
The days are over when it was normal to get an "empty" system and you wrote your own software.
Opting for the Linux kernel is logical, but also a bit boring and cheating I think.
Perhaps with the arrival of the Raspberry Pi and similar computers we'll see some alternative operating systems that aren't based on an existing kernel.
/rant on
Why is no one targeting mobile ARM-based devices with a hobby OS? Because it's not worth it. Designers of these have done anything in their power to make it extremely cumbersome to write and use custom OSs, through a combination of...
-An ill-defined architecture, where even the design of the most basic debugging features is left to individual SoCs. Related is the fact that there is not even a standard way to boot an OS: for every new piece of hardware, you have to learn yet another new set of stupid boot-time manipulations*
-Stupid firmwares which can't even charge a battery on their own and throw up ill-documented linux-targeted blobs when queried for a list of available hardware
-Hostile behaviour towards users who try to tamper with their OS by device manufacturers, including an obligation to use security exploits or wipe some user data (that cannot be easily backed up) in other to do so on many devices
-Hostile behaviour towards low-level developers by SoC manufacturers themselves, which will be extremely wary to open up their spec sheets, instead releasing linux-only binary drivers.
Your argument that we can just pick a relatively open piece of hardware like the Nexus devices and hack on it doesn't hold, because developing an interesting hobby OS takes more time than such a device will last.
If you take an x86 computer and write code for it, as long as you stick with standard features (that can already take you pretty far before the need for device-specific drivers emerges), it will also run on x86 computers 10 years away from now. If you take an ARM-based phone and write code for it, it will die in about two years and you'll be left with code that cannot be easily tweaked for another SoC, basically having to rewrite every low-level hardware code all over again.
ARM is a great architecture for embedded development. If what you are into is relatively simple software like that of cars, fridges, or creepy spider robots that walk on walls, you will enjoy the way the architecture lets SoC manufacturers do anything they want, leading to wide availability of cheap special-purpose development boards. But for the kind of general-purpose OS development that exists on x86 or PowerPC, anything ARM-based is, at this point, a nightmare.
* Some people will say that this is also true of x86 bioses. I disagree. There is a world of difference between staring at your screen during boot, holding the Esc/Fxx key pressed when you are told to, then following instructions, and randomly pressing various buttons with no feedback whatsoever in the hope that something useful will happen.
Besides, if I'm not misunderstood, it's extremely difficult to boot an OS from an external media without making changes to the existing OS install on ARM. Or, in other words, you HAVE to break the factory-provided OS install in order to try out something new.
/rant off
Edited 2013-04-16 05:56 UTC
So, if I give you a bootable ARM binary and some chinese Android phone chosen at random, can you run that binary on it without extra documentation about the phone? You're free to install u-boot to this end, if you can manage...
This is indeed possible, but then that wouldn't really be developing for mobile devices at large and more like developing for a tiny niche within the huge world of mobile devices.
If you go this route, you could as well develop for one of the few well-documented ARM development boards, after attaching a touchscreen to it. But I don't think that this is what Thom had in mind when he wrote this article.
Edited 2013-04-16 17:40 UTC
We love standards, ethical ones and hardware ones. We just don't have them yet. So in the all-to-common, worst-case scenarios these days, corporate coders would seem to have the same effect as corporate lawyers. Its a language targeted at protecting a product by cloaking it in a privacy. Code obfuscation and legalese can be intentionally ill-defined. Quoting some of the inspiring phrases:
- have done anything in their power to make it extremely cumbersome
- you have to learn yet another new set...
- hostile behavior towards users... and developers... extremely wary to open up
- rewrite every low-level hardware code all over again
- anything ARM-based is, at this point, a nightmare.
- you HAVE to break the factory-provided...
Of course standard GPL-type licensing is a best-case secenario. Thanks.
I'll bet you can find an emulator to develop a phone operating system.
For PC operating systems, you can run in a virtual machine. They are almost the same performance as native.
It's really easy to try my OS in VMware or QEMU....
>qemu-system-x86_64 -m 500 -cdrom TempleOSCD.ISO -boot d
For PC operating systems, virtual machines take away the glory of an operating system, but actually provide a hardware abstraction layer that's kinda nice. My operating system is practically guarenteed to work in VMware on everybody's 64-bit machine. That's not true of native hardware.
I've been working on several OSes about 10 years ago, even wrote my custom kernels, which served as a great experience gain for myself (1337 5k11lz
). It's just not worth it anymore, because of the sheer complexity required to write even simple kernel and one has to pay his bills (not to mention that the day is only 24 hours minus sleep time) - if you get past the state of the basic kernel stuff, you'll be killed by lack of drivers, which are almost impossible to write when you have no spec from the manufacturer. On PC you can boot OS from anything or simply launch Bochs/QEMU/etc. to get your code working. On embedded devices, you CAN'T. If you had to write your own firmware for the PC mobos, then it would be the same as writing boot code for the phone. Different phones require different configuration, because it's more flexible to configure phone's hardware e.g. using programmable voltage regulators. Many chips require very specific power sequencing to start - again, you need manufacturer's spec. Hardware configuration is often tweaked and stored in EPROM memory, so you can account for e.g. sillicon bugs. You even have to configure memory controller - each device model has it's own memory map. Even if you manage to boot your OS on one model of the phone and get all the hardware working, you'll be soon facing the fact (after thousands of hours of writing drivers), that the phone will be obsolete and not available to buy in couple of years. To boot your OS on the other people's gear, will make them risk bricking their stuff - if you paid $1000 for the phone, you don't want to toss it in the cupboard after failed "experiments", which might have had as well fried the hardware.
OS development serves as a great learning tool, though. Many of the CS students, which are "spewed" by today's schools, have very poor engineering skills (I estimate, based on people I've met, that - per class - there are 0-2 h4x0rs (who have learnt the stuff on their own, anyway), 1-5 skilled people and the rest are "film dummies", just to fill the space). They can't understand the concept of the pointer, let alone develop anything more complex than hello world app (their brains are fried when they have to develop an algorithm for something). Back in the DOS or C64/Amiga/etc. days people had to have some experience in how the operating system or computer works to get anything done. I've noticed that many of the people who've been forced to learn these things are the best paid and the most skilled experts today and can learn any technology that is thrown at them. During development of my OS I've learnt at lot about the electronics, hardware design, FPGAs, low level programming, concurrent programming, thread safety, programming languages, writing compilers, writing drivers and many other things.
I think that the most of people, who have developed their own kernel back in the day, now have other jobs (+ children), they have to pay their bills and buy their own food, which resulted in a priority shift.
Edited 2013-04-16 06:29 UTC
agentj,
"I think that the most of people, who have developed their own kernel back in the day, now have other jobs (+ children), they have to pay their bills and buy their own food, which resulted in a priority shift."
This applies to me as well. I've moved on to web stuff because the low level type of work that has always interested me the most has very little demand for it anymore. I'd happily go back to working on bare metal projects, but I cannot afford to do it for free. Money and bills are a very real limiting factor. If anyone has low level work to do and will pay, please let me know!
You can try to find a job at a game development company. A lot of console programming is bare metal.
twitterfire,
"You can try to find a job at a game development company. A lot of console programming is bare metal."
You know what, I am interested in that. I did make some low level hardware 3d graphic demos in DOS many years ago, but I didn't manage to land a gaming industry job as younger grad. There’s no harm in applying today, these days my family keeps me geographically tied down (edit: suffolk ny usa).
Do you know of any game dev studios that consider telecommuters? I've considered doing independent mobile game development, but the indy mobile devs I know make a pittance and so I am weary following in their footsteps.
Edited 2013-04-16 15:09 UTC
This applies to me as well. [...] Money and bills are a very real limiting factor.
I wonder, does that mean we'll see a resurgence of hobby OS projects when people capable of doing them will start hitting retirement age? :p
zima,
"I wonder, does that mean we'll see a resurgence of hobby OS projects when people capable of doing them will start hitting retirement age? :p"
Interesting observation... it's hard to say. On an NPR program I heard a doctor claim that 50% of the public at 65 begin to show signs of being senile.
Though not quite a mobile computer, the Pi has been used for at least one "alternative" OS: RISC OS.
With its relatively low footprint, RISC OS would make a decent mobile OS, though the GUI would have to be rewritten from the keyboard/mouse-centric desktop to a multitouch GUI.
As for phone OSs: A barrier is that telephony code is complicated and needs to pass strict standards. So it is more likely that alternative-OS mobile devices will target WiFi instead of telephony.
I think a lot of it is a cultural shift. Previously we believed it was possible for a little startup company to come along and compete with the big boys on the market (IBM mainly)
Acorn, Sinclair, APPLE!, MICROSOFT!!
An OS takes a large financial investment for a very small hope of return. WebOS, a wonderful mobile OS, even with Palm leading it couldnt break the market. HP bought it and it couldnt either.
I couldnt write a webOS, let alone give it the backing these companies did. I CAN however write a little phone app and make a few quid on the side..
-I am porting CM10 and 10.1 (JB 4.1 and 4.2) to a phone. porting existing operating systems might be hard enough, writing operating systems from scratch is orders of magnitude harder
-my CM10 ports will have far more users than if I start an alternative/hobby operating system myself; as a programmer, I tend to write software for people to use, not just for myself or for the sake of it
-reverse engineering mobile socs and chips is not worthing, in 1 year every soc will be oobsolete and a new one will take its place, mobile hardware is too much a moving target
-I make a distinction between alternative oses like Haiku, Syllable, ReactOS, Skyos and hobbist operating systems which are in general of lesser quality, tend to be unfinished, written to learn os development or just to try out various things and not meant to be usable one day
-when the likes of Haiku, ReactOS, Syllable, Skyos, AtheOS were planned, the devs didn't see them as hobbist, instead they thought and hoped that their operating systems might take of and people will use them
-alternative operating systems for desktop failed not only because lack of quality due to lack of manpower, but they also died because lack of critical mass and lack of apps
-Ubuntu, Sailfish, FirefoxOS will probably fail
-potential developers already know from experience what happened to "alternative" oses on other platforms, they know they can't succeed, their oses would never take off, so why waste their time and not do something productive instead?
-there might appear pure hobbist oses for mobile, like there are some for PC but those will be of even lower quality than "alternative" oses
-there might be some research oses and/or kernels in the future
I think one or two of them will succeed, for small values of "succeed".
I doubt FirefoxOS will grow to more than a few percent of the western markets (at best), but it may grow to a sizable fraction of emerging markets by being simple for first-timers, buoyed by the well-known Firefox brand, and devices running it are at the starting gate today. It's the obvious low-end play.
Ubuntu has an opening to be the third player at the high end simply because it uses the Android kernel as-is without the Google infrastructure ("Anything but Android" from Samsung and the dwarves is their best shot). It helps to have the long-term independent financial backing of an entrepreneur with solid industry connections, of course, and the "one device, many markets" strategy is kinda cool, too.
Why not WP, iOS, BB, Sailfish, webOS, or an upstart? The Windows desktop monoculture nightmare is too fresh and Microsoft too dysfunctional to grow beyond a few percent, I think, and iOS and BB are successful but forever single-vendor. I like both Sailfish and webOS, but I suspect their time has passed and I don't see a compelling reason for a vendor to choose one of them over FirefoxOS and Ubuntu. And an upstart would probably need to also invent a new device sub-category, as Apple did with non-stylus slab phones, to get a toe hold.
Just my opinions. Of course, I fully expected MeeGo to challenge Android back in the day. *shrugs*
What I find interesting actually is the lack of projects in this area coming from universities.
There used to be some amazing experimental projects (Plan 9, Oberon, Squeak) from universities and labs and I'm not aware of such experiments done on mobile devices.
It's also interesting to note that when some consistent hardware (arduino, Raspberry Pi) appears, it kind of encourages innovation more.
Someone just went full-on Linux advertisement. Excuse my SPAM.
Why don't you all try my operating system in VMware? Just spend a couple hours. You visit this site, so you have an interest in operating systems. You owe it to yourself.
You know how there are imparitive and functional programmers? A well educated person informs himself. Spend a couple hours with TempleOS. Why wouldn't you?
I have tried Linux and use Windows. At least I tried Linux. Personally, I get annoyed with file permissions. I have no use for that. I tried recursively chmod'ing to 777 on all files. Didn't work.
Did a C64 have permissions? It's just for me, not a mainframe for a hundred people.
People seem to love Git, for source code control. Maybe, I can get them to file my taxes -- crazy paperwork lovers.
Edited 2013-04-16 13:00 UTC
File permissions are here for a reason : business stuff. TempleOS for your own entertainment. Two different targets. Why asking to remove file permissions ? Because YOU don't need them ? How selfish... Your God should have taught you with humility and freedom of will.
Kochise
I don't know why you keep mentioning the C64. It was a great machine in the eighties, and like many, I learned to program on one.
But the eighties were a long time ago now. The world has moved on, and such a system really isn't useful for very much anymore.
I actually think that regular people want permissions, they just don't want to bother with them.
But I agree that Linux & friends taking over the desktop was a far fetched idea considering how badly designed it was for the purpose.
Maybe if all those people had put all that effort effort into something like Haiku things would look different today.
Paperwork is a funny thing. It's satisfying on some level. The trend in programming is to increase the paperwork -- more type checking, security, namespaces and source control. Do you see the concept -- more bureaucracy to get the same thing done. You no longer do one thing to get something done, you do five things to get something done. Admitedly, some of this is necessary for being a professional programmer, today. I thought it might be nice to create a recreational environment and go the extreme of anti-paperwork, to make it more fun and productive and simpler for beginners.
Getter and Setter functions, for example, can be nice, admitedly, but to stay true to my principles, I did everything directly. I got the notion of minimal layers of functional depth -- minimal abstraction -- put you almost right on the hardware, if possible. BASIC on a C64 poked hardware ports. You get intimitaly knowledgeable and it's satisfying, in another way.
Edited 2013-04-16 14:22 UTC
Name spaces are not that annoying, but I pity kids with yet another thing to understand.
Everybody obsesses on scaling issues and doesn't want hyperlinear scaling. If a 140,000 line operating system grew to 15Million, there would be scaling issues.
I had the brilliant idea that hyperlinear scaling applies both ways! If you can make an application only 1000 lines, you can rightfully use short variables, even global variables.
It makes me happy to have Linus on the side of reason in the liberal vs conservative debate -- he likes C. I'm okay with C++. The biggest practical problem with C++ is grep on function names.
You can Grep FileRead()
You cannot grep File::Read()!
I love to groom my code with global string find and replace. That can't be good with source control!
I saw someone define "liberal" and "conservative" as it applies to programmers. I think academic is liberal and industry is conservative. This person said sensible things, with one major exception. He classified assembly language as liberal. What? His logic was that there are no rules.
I took 5 assembly language courses in school -- geared to developing embedded hardware software, an operating system course and a compiler course and a computer graphics course.
In my embedded software course we had 2 week projects and we were graded on a curve on how short they were. The last project was a scheduler, to do multitasking on a little 8-bit motorola microcontroller.
A scheduler just saves registers, restores registers and schedules a timer to interrupt at a certain time. Tasks ask to run at a certain time. You manage that, its not hard.
What is an operating system, but a scheduler?
I did a compiler. Linus learned people won't call a kernel an operating system.
They do exist here are a few:
SHR -
Nemo Mobile -
QTMoko -
The problem is by and large the hardware is much less standardized, much less open and there is much less interest.
The problem is that if you get any of the smartphone platforms, most are running some flavor of UNIX. iOS? Darwin/BSD. Android? Forked Linux with WAIT_LOCKs, recently merged back. Blackberry? QNX.
If you want to invent a NON-POSIX OS, you won't have an app base. If you want to do POSIX, why will yours be better than the existing alternatives?
You can install existing linux distos for ARM right not on rooted phones and tablets.
So the problem with doing a DIY OS is that it would be a dead-end. It might be a neat hack - there are many. But if you want to actually use it for something, the infrastructure will be harder. For example, how do you do a bluetooth and wifi stack?
There is now the Raspberry Pi - that might spark some interest and development. It is cheaper, the hardware and software are open. Something might happen there.
I learned a lesson. I was going to do filesystem metadata, perhaps critic star ratings on files, or something. I put FAT32 support in there. Then, I lost enthusiasm for file system meta data because I could not copy it onto my FAT partition, could I? I would have to make a file or something.
The lesson is -- sever all ties to other operating systems.
You're gonna use gcc? That is a huge trojen horse. It's going to make you POSIX. It's means you file format will be elf. It means your source code will be ASCII. Suddenly, it looks exactly like Linux!
I redesigned absolutely everything. There is no other sane choice. When Unix was invented, they made ASCII and C. The C64 did not use ASCII. I use 8-bit unsigned IBM PC codes plus graphics.
A related bit of wisdom. If you have a hardware like a mouse, some have three button. If you pass that on to your API for your users... could someone make a game requiring a 3 button mouse? You've just clusterf--ked all games. They have to split code and interfaces.
Multiply different hardware by sound and graphics and everything and its a total clusterf--k of divergent branches of software for all the applications made by all users.
What good is 4 speakers? What good is particle effects or a physics engine in one graphics card.
If you have a physics engine in some graphics card... it's completely useless. Nobody in their right mind is going to do custom code for some stupid piece of hardware!
Edited 2013-04-16 18:14 UTC
omg Thom, YOU MISS best time in computer history... YOU have only scraps
"so now we're back to slashdot 2001 debating what an operating system is."
We *could* debate semantics like whether "hobbyist mobile operating systems" should include android forks or not, but that's kind of missing Thom's point. He asks why there are no alternative hobby operating systems (not just forks) in the mobile realm like there were in the desktop realm.
Too little credit has been given to BIOS, which was the backbone of early OS's. With that as a starter, a DOS could be written as a term paper. New OS development, if any, will have it's roots in classroom assignments, which means starting from somewhere well past the CPU. If you can consider Linux to be the new "BIOS" then give credit to those who build an OS on top of it. | http://www.osnews.com/comments/26950 | CC-MAIN-2013-48 | refinedweb | 8,571 | 71.04 |
Just for fun. Inspired by this crash course in Django. It is a little different because we decided to:
There is no installation process. Download web2py from, unzip and click on "web2py".
It will ask you to choose a one time administrator password. Choose one.
Visit the admin interface at
Login with the password you choose above. In the textbox on the right type the name for the new app, for example "blog", and press "submit". You will be redirected to a design page that let you edit the new page.
Click on "database administration" to get a web based interface to your database.
In the design page create a new model file called "db_blog.py" by typing the name in appropriate textbox. In the file write:
db.define_table('post', Field('title',length=256), Field('body','text',requires=IS_NOT_EMPTY()), Field('author',db.auth_user)) db.define_table('comment', Field('post',db.post,writable=False,readable=False), Field('author',db.auth_user,writable=False,readable=False), Field('body','text',requires=IS_NOT_EMPTY()))
Go to the database administrative interface to add some posts:
IMPORTANT!!! Make sure you register and login with the application else posts will have no author and they cannot be displayed.
In the design page edit the controller "default.py" by clicking on the appropriate link, edit the "index" function and create a "view_post" function as show below:
def index(): return dict(posts=db().select(db.post.ALL)) def view_post(): post = db.post[request.args(0)] or redirect(URL(r=request,f='index')) if auth.is_logged_in(): db.comment.post.default = post.id db.comment.author.default = auth.user.id form = crud.create(db.comment) else: form = A("login to comment",_href=URL(r=request,f='user/login')) comments = db(db.comment.post==post.id).select(db.comment.ALL) return dict(post=post, form=form, comments=comments)
At this point the app is fully working although you may not like the default views so...
Edit the view "default/index.html" associated to the "index()" action. Replace the content of the view with:
{{extend 'layout.html'}} {{from gluon.contrib.markdown import WIKI}} <h1>Posts</h1> {{ for post in posts:}} <h2> <a href="{{=URL(r=request,f='view_post',args=post.id)}}"> {{=post.title}} </a> </h2> {{=WIKI(post.body)}} {{ pass }}
Create now a view for a post and its comments. Do this by creating a new view file "default/view_post.html"
{{extend 'layout.html'}} {{from gluon.contrib.markdown import WIKI}} <h1>{{=post.title}}</h1> <h2>by {{=post.author.first_name}}</h2> {{=WIKI(post.body)}} <h2>Comments</h2> {{for comment in comments:}} <blockquote> {{=WIKI(comment.body)}} <em>by {{=comment.author.first_name}}</em> </blockquote> {{ pass }} {{=form}}
You can now visit your blog at
and view the individual blog pages at ...
Commenting requires login. You can login, register, manage your profile, etc here:
You can administer you blog (manage accounts, create and delete posts and comments) at
You can edit your source code online via
If you do not like the default URLs you can change the mapping by editing
routes.py
You can also do reverse url mapping so that you do not need to edit code or views when url changes. Links will not break.
This code works on GAE as it is. It does not need any tweaking. You need to run the source version of web2py, not the binary distribution. You must edit the provided "app.yaml" in the main web2py folder and replace "web2py" with the name of your GAE application-id. Then deploy it on GAE
cd /path/to/where/web2py/is appcfg.py update web2py
You can now find your app running at | http://www.web2py.com/AlterEgo/default/show/253 | CC-MAIN-2015-35 | refinedweb | 599 | 53.07 |
From our sponsor: Reach inboxes when it matters most. Instantly deliver transactional emails to your customers.
In this article we will explore many of the new features available from GSAP 3. The GreenSock animation library is a JavaScript library many front-end developers turn to because it can be easy to get started and you can create powerful animations with a lot of control. Now with GSAP 3 getting started with GreenSock is even easier.
Some of the new features we will cover in this article are:
- GreenSock’s smaller file size
- A Simplified API which offers a newer syntax
- Defaults in timelines
- Easier to use with build tools and bundlers
- Advanced stagger everywhere!
- Keyframes
- MotionPath and MotionPath plugin
- use of Relative “>” and “<” position prefix in place of labels in Timelines
- The new “effects” extensibility
- Utility methods
…and more!
GreenSock’s smaller file size
First and foremost the GreenSock library is now even smaller. It still packs all the amazing features I love, plus more (50+ more to be exact). But it is now about half the size! We will see some of the reasons below like the new simplified API but at its core GSAP was completely rebuilt as modern ES modules.
A Simplified API
With the new version of GreenSock we no longer have to decide whether we want to use TweenMax, TweenLite, TimelineMax, or TimelineLite. Now, everything is in a single simplified API so instead of code that looks like this:
TweenMax.to('.box', 1, { scale: 0.5, y: 20, ease: Elastic.easeOut.config( 1, 0.3) })
We can write this instead:
gsap.to(".box1",{ duration: 1, scale: 0.5, y: 20 // or you can now write translateY: 20, ease: "elastic(1, 0.3)", });
Creating Timelines is easier too. Instead of using new TimelineMax() or new TimelineLite() to create a timeline, you now just use gsap.timeline() (simpler for chaining).
Here is an example of the first syntax change. Note that the old syntax still works in GSAP 3 for backward compatibility. According to GreenSock, most legacy code still works great.
See the Pen GreenSock New vs Old syntax by Christina Gorton (@cgorton) on CodePen.dark
Duration
Previously, the animation’s duration was defined as its own parameter directly after the target element. Like this:
TweenMax.to('.box', 1, {})
With the new version, duration is defined in the same vars object as the rest of the properties you animate and therefore is more explicit.
gsap.to(".box",{ duration: 2, });
This adds several benefits such as improved readability. After working with and teaching GSAP for a while now, I agree that having an explicit duration property is helpful for anyone new to GreenSock and those of us who are more experienced. This isn’t the only thing the new API improves though. The other benefits will become more obvious when we look at defaults in timelines and the new Keyframes.
Defaults in timelines
This new feature of GSAP is really wonderful for anyone who creates longer animations with gsap.timeline(). In the past when I would create long animations I would have to add the same properties like ease, duration, and more to each element I was animating in a timeline. Now with defaults I can define default properties that will be used for all elements that are animated unless I specify otherwise. This can greatly decrease the amount of code you are writing for each timeline animation.
Let’s take a look at an example:
This Pen shows a couple of the new features in GSAP 3 but for now we will focus on the defaults property.
See the Pen Quidditch motionPath by Christina Gorton (@cgorton) on CodePen.dark
I use defaults in a few places in this pen but one timeline in particular shows off its power. At the beginning of this timeline I set defaults for the duration, ease, yoyo, repeat, and the autoAlpha property. Now instead of writing the same properties for each tween I can write it one time.
const moving = () => { let tl = new gsap.timeline({ defaults: { duration: .02, ease: "back(1.4)", yoyo: true, repeat: 1, autoAlpha: 1 } }) tl.to('.wing1',{}) .to('.wing2',{}) .to('.wing3',{}) return tl; }
Without the defaults my code for this timeline would look like this:
const moving = () => { let tl = gsap.timeline() tl.to('.wing1',{ duration: .02, ease: "back(1.4)", yoyo: true, repeat: 1, autoAlpha: 1 }) .to('.wing2',{ duration: .02, ease: "back(1.4)", yoyo: true, repeat: 1, autoAlpha: 1 }) .to('.wing3',{ duration: .02, ease: "back(1.4)", yoyo: true, repeat: 1, autoAlpha: 1 }) return tl; }
That is around a 10 line difference in code!
Use of Relative > and < position prefix in place of labels in Timelines
This is another cool feature to help with your timeline animations. Typically when creating a timeline I create labels that I then use to add delays or set the position of my Tweens.
As an example I would use tl.add() to add a label then add it to my tween along with the amount of delay I want to use relative to that label.
The way I previously used labels would look something like this:
gsap.timeline() .add("s") .to(“.box1", { ... }, "s") .to(“.box2", { ... }, "s") .to(“.box3", { ... }, "s+=0.8") .to(“.box4", { ... }, "s+=0.8”);
With > and < you no longer need to add a label.
"Think of them like pointers - "<" points to the start, ">" points to the end (of the most recently-added animation)."
- "<" references the most recently-added animation's START time
- ">" references the most recently-added animation's END time
So now a timeline could look more like this:
gsap.timeline() .to(“.box1", { ... }) .to(“.box2", { ... }, "<") .to(“.box3", { ... }, "<0.8") .to(“.box4", { ... }, "<”);
And you can offset things with numbers like I do in this example:
See the Pen MotionPath GreenSock v3 by Christina Gorton (@cgorton) on CodePen.dark
Stagger all the things
Previously in GSAP to stagger animations you had to define it at the beginning of a tween with either a staggerTo(), staggerFrom(), or staggerFromTo() method. In GSAP 3 this is no longer the case. You can simply define your stagger in the vars object like this:
tl.to(".needle",{ scale: 1, delay:0.5, stagger: 0.5 //simple stagger of 0.5 seconds },"start+=1")
...or for a more advanced stagger you can add extra properties like this:
tl.to(".needle",{ scale: 1, delay:0.5, stagger: { amount: 0.5, // the total amount of time (in seconds) that gets split up among all the staggers. from: "center" // the position in the array from which the stagger will emanate } },"start+=1")
This animation uses staggers in several places. like the needles. Check out all the staggers in this pen:
See the Pen Cute Cactus stagger by Christina Gorton (@cgorton) on CodePen.dark
Easier to use with build tools and bundlers
When I have worked on Vue or React projects in the past working with GreenSock could be a little bit tricky depending on the features I wanted to use.
For example in this Codesandbox I had to import in TweenMax, TimelineMax and any ease that I wanted to use.
import { TweenMax, TimelineMax, Elastic, Back} from "gsap";
Now with GSAP 3 my import looks like this:
import gsap from "gsap";
You no longer have to add named imports for each feature since they are now in one simplified API. You may still need to import extra plugins for special animation features like morphing, scrollTo, motion paths, etc.
Keyframes
If you have ever worked with CSS animations then keyframes will be familiar to you.
So what are keyframes for in GreenSock?
In the past if you wanted to animate the same set of targets to different states sequentially (like "move over, then up, then spin"), you would need to create a new tween for each part of the sequence. The new keyframes feature lets us do that in one Tween!
With This property you can pass an array of keyframes in the same vars objects where you typically define properties to animate and the animations will be nicely sequenced. You can also add delays that will either add gaps (positive delay) or overlaps (negative delay).
Check out this example to see the keyframes syntax and the use of delays to overlap and add gaps in the animation.
See the Pen GreenSock Keyframes by Christina Gorton (@cgorton) on CodePen.dark
MotionPath and MotionPath helper plugin
One of the features I am most excited about is MotionPathPlugin and the MotionPathHelper. In the past I used MorphSVGPlugin.pathDataToBezier to animate objects along a path. Here is an example of that plugin:
See the Pen MorphSVGPlugin.pathDataToBezier with StaggerTo and Timeline by Christina Gorton (@cgorton) on CodePen.dark
But the MotionPathPlugin makes it even easier to animate objects along a path. You can create a path for your elements in two ways:
- With an SVG path you create
- Or with manual points you define in your JavaScript
The previous Quidditch pen I shared uses MotionPathPlugin in several places. First you need to register it like this:
//register the plugin gsap.registerPlugin(MotionPathPlugin);
Note: the MotionPathHelper plugin is a premium feature of GreenSock and is available to Club GreenSock members but you can try it out for free on CodePen.
I used an SVG editor to create the paths in the Quidditch animation and then I was able to tweak them directly in the browser with the MotionPathHelper! The code needed to add the MotionPathHelper is this
MotionPathHelper.create(element)
I then clicked "COPY MOTION PATH" and saved the results in variables that get passed to my animation(s).
Paths created with the MotionPathPlugin helper
const path = "M-493.14983,-113.51116 C-380.07417,-87.16916 -266.9985,-60.82716 -153.92283,-34.48516 -12.11783,-77.91982 129.68717,-121.35449 271.49217,-164.78916 203.45853,-70.96417 186.21594,-72.24109 90.84294,-69.64709 ", path2 ="M86.19294,-70.86509 C64.53494,-36.48609 45.53694,-13.87709 -8.66106,-8.17509 -23.66506,-40.23009 -30.84506,-44.94009 -30.21406,-88.73909 6.79594,-123.26109 54.23713,-91.33418 89.94877,-68.52617 83.65113,-3.48218 111.21194,-17.94209 114.05694,18.45191 164.08394,33.81091 172.43213,34.87082 217.26913,22.87582 220.68213,-118.72918 95.09713,-364.56718 98.52813,-506.18118 ", path3 = "M-82.69499,-40.08529 C-7.94199,18.80104 66.81101,77.68738 141.56401,136.57371 238.08201,95.81004 334.60001,55.04638 431.11801,14.28271 ", path4 = "M126.51311,118.06986 C29.76678,41.59186 -66.97956,-34.88614 -163.72589,-111.36414 -250.07922,-59.10714 -336.43256,-6.85014 -422.78589,45.40686 ";
Example of a path passed in to animation
const hover = (rider, path) => { let tl = new gsap.timeline(); tl.to(rider, { duration: 1, ease: "rider", motionPath:{ path: path, } }) return tl }
In this timeline I set up arguments for the rider and the path so I could make it reusable. I add which rider and which path I want the rider to follow in my master timeline.
.add(hover("#cho", path3),'start+=0.1') .add(hover("#harry", path4),'start+=0.1')
If you want to see the paths and play around with the helper plugin you can uncomment the code at the bottom of the JavaScript file in this pen:
See the Pen Quidditch motionPath by Christina Gorton (@cgorton) on CodePen.dark
Or, in this pen you can check out the path the wand is animating on:
See the Pen MotionPath GreenSock v3 by Christina Gorton (@cgorton) on CodePen.dark
Effects
According the the GreenSock Docs:
Effects make it easy for anyone to author custom animation code wrapped in a function (which accepts targets and a config object) and then associate it with a specific name so that it can be called anytime with new targets and configurations
So if you create and register an effect you reuse it throughout your codebase.
In this example I created a simple effect that makes the target "grow". I create the effect once and can now apply it to any element I want to animate. In this case I apply it to all the elements with the class ".box"
See the Pen GreenSock Effects by Christina Gorton (@cgorton) on CodePen.dark
Utility methods
Lastly, I'll cover the utility methods which I have yet to explore extensively but they are touted as a way to help save you time and accomplish various tasks that are common with animation.
For example, you can feed any two similarly-typed values (numbers, colors, complex strings, arrays, even multi-property objects) into the gsap.utils.interpolate() method along with a progress value between 0 and 1 (where 0.5 is halfway) and it'll return the interpolated value accordingly. Or select a random() value within an array or within a specific range, optionally snapping to whatever increment you want.
Most of the 15 utility methods that can be used separately, combined, or plugged directly into animations. Check out the docs for details.
Below I set up one simple example using the distribute() utility which:
Distributes an amount across the elements in an array according to various configuration options. Internally, it’s what advanced staggers use, but you can apply it for any value. It essentially assigns values based on the element’s position in the array (or in a grid)
See the Pen GreenSock Utility Methods by Christina Gorton (@cgorton) on CodePen.dark
For an even more impressive example check out Craig Roblewsky's pen that uses the distribute() and wrap() utility methods along with several other GSAP 3 features like MotionPathPlugin:
See the Pen MotionPath Distribute GSAP 3.0 by Craig Roblewsky (@PointC) on CodePen.dark
That wraps up the features we wanted to cover in this article. For the full list of changes and features check out this page and the GreenSock docs. If you'd like to know what old v2 code isn't compatible with v3, see GreenSock's list. But there's not much as GSAP 3 is surprisingly backward-compatible given all the improvements and changes.
References
All of the Pens from this article can be found in this collection.
For more examples check out GreenSock's Showcase and Featured GSAP 3 Pens collection.
Can i use cubic-bezier function? | https://tympanus.net/codrops/2019/11/14/the-new-features-of-gsap-3/ | CC-MAIN-2020-29 | refinedweb | 2,388 | 63.7 |
I did it - I finished my final project for Flatiron School!! It is a simple Symptom Tracking app built with Ruby on Rails backend and React frontend. This was difficult to do, partially because I found React to be the most challenging thing we've learned in Flatiron, and partially because I was doing it after sustaining an injury (my own symptom journal during this time was the inspiration for the app - I took my little notebook and made it digital!)
Despite React being challenging to learn (in my opinion), it is a lot of fun once you get over the learning curve. React is a JavaScript library and a powerful tool for building SPAs. It relies on state management and rendering it to the DOM. In my app, I also used Redux. Redux is a way to store and interact with state and allow the data to be manipulated and passed between components.
Here are some useful graphics that helped me understand React and Redux:
Here is an example of the way state is used in my project. This is from the Form component:
class SymptomsForm extends Component { state = { title: "", severity: "", notes: "", }; handleChange = (e) => { const { name, value } = e.target; this.setState({ \[name\]: value, }); }; handleSubmit = (e) => { e.preventDefault(); this.props.addSymptom(this.state); this.setState({ title: "", severity: "", notes: "", }); if ( this.state\["title"\] !== "" && this.state\["severity"\] !== "" && this.state\["notes"\] !== "" ) { this.props.history.push("/"); } };
This is also where Redux comes in:
State is within an object tree inside of store. In order to change that state tree, an action must be used (an object), and that action must be dispatched to the store. Dispatching requires using reducer functions. This is an example from my project of those both look:
The action which creates the symptom after the form is filled out and the user hits submit:
export const addSymptom = (symptom) => { return (dispatch) => { fetch("", { method: "POST", body: JSON.stringify(symptom), headers: { "Content-Type": "application/json" }, }) .then((res) => { if (res.status === 422) { alert("Please fill out all fields"); } return res.json(); }) .then((symptoms) => dispatch({ type: "ADD\_SYMPTOMS", payload: symptoms }) ); }; };
Reducer:
export const symptomsReducer = (state = \[\], action) => { switch (action.type) { // case 'FETCH\_SYMPTOMS': // return action.payload; case 'ADD\_SYMPTOMS': return \[...state, action.payload\]; // case 'DELETE\_SYMPTOM': // return \[ // ...state.filter(item => item.id !== action.payload) // \]; default: return state; } };
The switch statement here allows the program to determine which function should be executed based on type. I commented out the other functions to show what the reducer would look like only with the addSymptom action. The reducer sees that the action was taken, and returns the state accordingly. Payload is basically just the data in the action object.
Ultimately, I think React is great tool, and I definitely plan to expand this project. I want to add a user auth, and a heat map calendar like the GitHub one to reflect entries. Stay tuned! For now, here are the links to this project:
Discussion (12)
May I know which tool you used for the gif ?
sure! I embedded it with this:
I think he was asking for the tool you used to create the gif :-)
As Natalie hasn't responded, I'll suggest that it was an existing stock animated gif (i.imgur.com/jl9CSOO.gif). She didn't actually claim to have created the gif!
Great job! I can tell you that React itself was probably not hard to learn, it was probably redux that made it challenging. I also learned redux with React at the same time when I first started using it, and it made things a lot harder. I would say try another state management strategy, like context. It's a lot easier to understand in the beginning.
Redux and the strategy of async messaging is useful to know though, it helped me to understand microservices and automation strategies later on down the road.
Maybe createSlice method from redux-toolkit is easier than normal redux...
Congrats on successfully finishing your program!
thanks! :)
Congrats on your final project! Pretty damn cool 💯
Great
That Redux graphic is so neat! Why are all the square brackets escaped?
Did they force you to use redux or is it your choice?
I mean redux is so bad, i'd never use it in my pet projects | https://practicaldev-herokuapp-com.global.ssl.fastly.net/stuxnat/final-react-project-2poi | CC-MAIN-2022-33 | refinedweb | 707 | 64.71 |
On 10/24/2014 10:41 PM, Mike Frysinger wrote: > On 24 Oct 2014 22:08, ricaljasan wrote: >> Chapter; > > it isn't used though. if you read the file: > #ifndef errno > extern int errno; > #endif So this is like a "default", and not volatile. > > and Linux systems do: > extern int *__errno_location (void) __THROW __attribute__ ((__const__)); > #define errno (*__errno_location ()) > -mike > Unfortunately, I'm at the start of the learning curve, so I'm having a hard time seeing where the above declaration results in 'volatile'. I get that errno is aliased to __errno_location (for lack of a better term - maybe "defined as the value that the dereferenced return value of __errno_location gives" is more correct), but the right half of the declaration is a little new to me. I read it as the declaration of the address of an int returned by a function __errno_location which doesn't take any arguments and ...what? It throws a constant attribute? Sounds like a temper-tantrum from hell. Searching for "C declarations with attributes after the function" yielded this link: which makes it sound like the const attribute is the opposite of what I understood the volatile keyword to mean (largely garnered from const means the compiler can reduce/optimize the code, volatile means it shouldn't, because the value can change between calls, despite what it otherwise thought. To reconcile volatility, I thought maybe __THROW was a negation (as in "throw away the const attribute"), but from what I can see it seems to be a way of saying a C function won't throw an exception, a C++ function will, or nothing at all. From misc/sys/cdefs.h: #ifdef __GNUC__ ... # if !defined __cplusplus && __GNUC_PREREQ (3, 3) # define __THROW __attribute__ ((__nothrow__ __LEAF)) ... # else # if defined __cplusplus && __GNUC_PREREQ (2,8) # define __THROW throw () ... # else # define __THROW So the fallback declaration of errno is as an int and a Linux system defines it as a macro which expands to the dereferenced return value of a function which returns the address of an int (errno, actually, which seems a bit... recursive; see csu/errno-loc.c), has a const attribute, and may or may not throw an exception depending on whether this is C or C++. Linux systems actually appear to only do that conditionally (using sysdeps/unix/sysv/linux/bits/errno.h, but hppa/alpha/mips/sparc under linux do similarly): #ifdef _ERRNO_H ... # ()) The above is done unconditionally in sysdeps/mach/hurd/bits/errno.h. Beyond that, the only other definitions of errno I see are to rtld_errno, __libc_errno, and errno in include/errno.h. I'm still failing to see how the manual is correct in stating errno is declared volatile. This is about the extent of my sleuthing, as I'm now wondering if I must be mistaken in my understanding of volatile and const. As much as I love chasing rabbits down their holes, it's probably best if I wait for a nudge in the right direction now. You can see why I've only begun submitting patches for grammar in the manual. I know I have a lot to learn about glibc (and C, period, which is what originally drew me to reading the manual). This topic came out of vetting the prototype definitions in the manual. I have a number of other things I also need to sift through because the source and manual don't quite seem to line up, but I need to get a little more acquainted with the landscape of glibc so I can have more surety in my ability to follow the mazes of dependencies, declarations, definitions, and what-have-you. Thank you for the help, Rical Jasan | https://sourceware.org/pipermail/libc-help/2014-October/003243.html | CC-MAIN-2022-21 | refinedweb | 615 | 58.42 |
Type: Posts; User: Sirjorj
I sure would like to get a fresh copy of IE 5.5 for Win 95!
My copy has become very nervous and flighty. Like with the Error messages and loading multiple copies and misbehaving generally.
Anyone...
Thank you folks for suggestions.
The problem is resolved. Thought you might like to know how I found the solution. I happened upon a Shareware distribution site and found "ZILLA Tuneup and...
I lost a system file and my Win95 OS is very unhappy and that makes me unhappy as well.
Anyone have or knows where I can get a copy of WS2_32.DLL ?
Seems like I lost a System file and Win95 complains about it everytime it loads. The file is: WS2_32.DLL.
Anyone know where I could find a copy? Tried MSN-no luck.
:(
I would think there is a library function to find the number of elements in an array of pointers- such as:
char xx[] = {"abc", "aaoo", "bb", ......."x"}
but at the moment am stumped.
:(
[QUOTE]Originally posted by wayside
I don't understand what you mean by the "file address", this makes no sense to me.
Yes you are right with regard to the SAMPLE code I submitted, I could read...
[QUOTE]Originally posted by wayside
yy=(int)aa;
You are setting yy to be equal to a pointer to aa, not the bytes that are in aa. You would need to do something like
'Casue I don't know the file address for yy but I do know its location in a buffer that had been read.
So I must handle it from that location.
:)
No- its not too late!
Sorry folks I made a mistake in setting up an example situation of the real problem. The integer is actually being read in as an element of a larger char block.
A more correct...
Hi Boys 'n Girls-
Kin someone point out the problem with this code?
#include "stdafx.h"
#include <stdio.h>
int main(int argc, char* argv[])
{
FILE *Afile;
int xx=123456,yy;
Thanks all,
You have given me some ideas to pursue.
:wave:
In answer to GCDEF's question- This would be a one-time thing.
I need the number that is associated with each name then the source file is scrap. And, Oh yes, the size of the source file is 120k...
Thank you Adam for your reply. However, I'm not concerned with a DataBase Management program and I'm not familiar with ADO and was hoping for a simpler approach. The records, I might add, are already...
I have a very large text file containing records that consist of a NULL terminated Name (vartiable length) followed by an Int. I need a routine which will find a particular record for a given name....
wow! This silence is deafening
Hard to believe that no one here knows the answer.
Maybe someone at least knows the prototype or has
had some experience with its...
I've looked everywhere I can think of and searched MSDN for "std::sort" but came up empty.
Anyone know where BIll Gates put the documentation on that fellow?
Thanks, if you kin help! :)
I realize that, strictly speaking, this is the wrong forum for my question but, my experience is that it is the only place that I can get answers.
Something has gone awry with my boot sequence...
Thinking 'bout replacing my old puter with a new one :(
But I wonder if XP or 2000 (I figgur one or the other are bundled) support DOS. It would pose a problem if I had to give up DOS.
TIA :D
=====> MERRY CHRISTMAS and a HAPPY NEW YEAR to ALL. <=====
August 12
Moved to our new home in Michigan. It is so wonderful here; Lake Michigan is magnificent. Can hardly wait to see snow on the...
After doing the dumb thing of turning off my puter, I ran into a strange problem when I turned it on again. This is Windows 95.
It failed to boot with the error msg: "CD-ROM drive #0 found on 170h...
This is a "Thank You" note to all on this forum that have helped me in the past. I have asked a lot of questions and received much help and I think it important that you are recognition from time to...
I would like to build a Web Site but don't have a clue about how to go about it. I do need suggestions, tips and maybe sites to check out. I have seen many ads about hosting sites for under...
I would like to disable some (or all) TabStops in a dialog box.
the following:
dlg.m_MyWord.TabStop=FALSE;
produces the error: TabStop is not a member of CString.
where m_MyWord identifies a...
I am looking for a few folks who would be willing to act as Beta testers for an English dictionary program I will be releasing soon. If you would like to do this you may eMail me at:...
OK Dave-
I now understand better what 'focus' is.
Thank you very much. | http://forums.codeguru.com/search.php?s=215a3b3606c28a3da8b8d4e325a4424d&searchid=7000651 | CC-MAIN-2015-22 | refinedweb | 848 | 84.07 |
Talk:Displaying Inheritance Hierarchy in the Graph Window
If you have the Type List grouped by File, Namespace or Custom Group, you can select the File, Namespace or Custom Group node and the Graph view will display the hierarchical relationship of the classes declared in the corresponding file, namespace or custom group.
NOTE: The Type List allows you to multi select nodes of the same level except when you're in hierarchy mode. In hierarchy mode you can multi select nodes at any level. So certain graphs are easier to generate in one grouping that another. For example, if you want to create a graph of just classes in your project, it's best to activate Custom grouping. And if you want to create a graph of classes from two files or two namespaces, it's best to activate 'Group by Files' and 'Group by Namespaces' respectively. Babetb
Response
Thank you for your comments! I've incorporated them into the help topic. KrisHouser 11:39, 30 October 2009 (PDT) | http://docwiki.embarcadero.com/RADStudio/Rio/en/Talk:Displaying_Inheritance_Hierarchy_in_the_Graph_Window | CC-MAIN-2020-05 | refinedweb | 169 | 61.56 |
Our application will associate sites with tags (many to many relationship), like delicious does, but in a much simplified manner. For instance, delicious keeps tracks of which user gave which tag to which URL. We will only associate sites with tags. But it will be very easy to add this functionality later.
We’ll quickstart a new project (the -s argument tells tg-admin that the project will use SQLAlchemy and not SQLObject)
Enter package name [tags]:
Do you need Identity (usernames/passwords) in this project? [no] yes
Defining The Model
We are going to have a table for the sites, a table for the tags and a table that associates sites with tags (many-to-many). Here’s the code which defines the tables (which goes in model.py):
Column(‘site_id’, Integer, primary_key=True),
Column(‘title’, Unicode(256)),
Column(‘url’, Unicode(1024)),
)
tags_table = Table(‘tags’, metadata,
Column(‘tag_id’, Integer, primary_key=True),
Column(‘name’, Unicode(32), index=‘tag_idx’))
sites_tags_table = Table(‘sites_tags’, metadata,
Column(‘site_id’, Integer,
ForeignKey(‘sites.site_id’),
primary_key=True),
Column("tag_id", Integer,
ForeignKey(‘tags.tag_id’),
primary_key=True))
We will now create the Python classes that correspond to these tables:
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
def link(self):
return "/tags/"+self.name
class Site(object):
def __init__(self, url, title):
self.url, self.title = url, title
Note the
link() method in the
Tag class. You might wonder what it does there. It just a little habbit that I wanted to share with you. I’ve found myself many times hard-coding URLs inside my templates. Then, if you want to make a tag linkable in many different places in your app, you have to hard-code the link every time. In this way, you can just pass the tag object to your template and do something like:
Ok, now we continue with mapping the classes to the tables:
mapper(Site, sites_table, properties = {
‘tags’: relation(Tag, secondary=sites_tags_table, lazy=False),
})
Great. Now we can construct the database and start populating it:
$ tg-admin shell
…
>>> g = Site(‘’, ‘Search engine’)
>>> g.tags
[]
>>> g.tags = [Tag(‘search’), Tag(‘google’)]
>>> session.save(g)
>>> session.flush()
(here SQLAlchemy echos the SQL statements it executes)
Handling tags
So we got the model right. The next step is to allow the users to provide tags for the site. The easiest way (for you and your users) is to ask them to enter the tags in a space-separated list. Suppose you are given this kind of space-seperated string of tags from a user, then you have to:
- convert all tags to lower case, in order to avoid case senstivity issues
- check if the string contains the same tag twice
- find which tags are already in the database and which are new
- recover from some nonsense that users might throw at you
and then get a list of Tag objects that you can assign to a site. So here’s a function that does just that:
"""Get a string of space sperated tag,
and returns a list of tag objects"""
result = []
tags = tags.replace(‘;’,‘ ‘).replace(‘,’,‘ ‘)
tags = [tag.lower() for tag in tags.split()]
tags = set(tags) # no duplicates!
if ” in tags:
tags.remove(”)
for tag in tags:
tag = tag.lower()
tagobj = session.query(Tag).selectfirst_by(name=tag)
if tagobj is None:
tagobj = Tag(name=tag)
result.append(tagobj)
return result
So you can now easily do something like:
>>> f.tags = get_tag_list(‘photo sharing photograpy’)
>>> f.tags
[photo, sharing, photograpy]
>>> f.tags[0].link()
‘/tags/photo’
Tag Search
It is straightforward to just list a site together with its tags:
<p class="site-tags">Tags:
<a py:${tag.name}</a>
</p>
Search is a bit more tricky. It took me few attempts until I got the search queries right. Here’s how to fetch all sites that are tagged by ‘google’:
sites = q.select((Tag.c.name==‘google’) & q.join_to(‘tags’))
the magic is mostly inside the join_to method – it stands for the SQL statements that makes sure that the Tag clause is associated to the sites. Without it, the query runs over the entire cartesian product of Sites x Tags.
You can make the query simpler (for MySQL; not you), if you fetch the tag_id of ‘google’ first. Then, the query uses only 2 of the 3 tables:
if not tagobj:
raise cherrypy.InternalRedirect(‘/notfound’)
sites = session.query(Site).select((sites_tags_table.c.tag_id == tagobj.tag_id) &
(sites_tags_table.c.site_id == Site.c.site_id))
To search for
google|photo:
sites = q.select(
Tag.c.name.in_(‘google’, ‘photo’) &
q.join_to(‘tags’))
To search for
sharing+photos:
sites = q.select(
Tag.c.name.in_(‘sharing’,‘photos’) &
q.join_to(‘tags’),
group_by=[Site.c.site_id],
having=(func.count(Site.c.site_id)==2))
The idea is that sites that are tagged both with ‘sharing’ and ‘photos’ will appear twice in the select, then after grouping by site_id and getting all which appear twice, we get the desired result.
There are many more things that can be done from this point, like: associating with the tag-site relationship which user added the tag, rendering a tag cloud and so on. Feel free to leave comments!
Thanks! Very helpful.
You say “To search for google+photo” but then you do a search for ’sharing’ and ‘photos’…
Thanks Damjan. Post updated.
Also, you use `session.query(Site)’ often but also you use `q.’ which is assumed to be from the first `q = session.query(Site)’ … it’s a bit confusing. It’s possible to just use `q’ always right?
Can’t the page.tags be a list of normal strings? Because currently it’s a list of Tag instances.
For ex. if I wish for ‘photos’ in page.tags to work I had to implement this method in the Tag class:
def __cmp__(self, other):
if isinstance(other, Tag):
return cmp(self.name, other.name)
elif isinstance(other, basestring):
return cmp(self.name, other)
else:
return cmp(self, other)
Damjan, you can add to the Site class a property that will give you the tags as a list of strings:
One comment and a question:
Comment: The quotes you use in the examples seem to be forward quote and backward quote, which when I paste into TextMate on my Macintosh give a syntax error and I need to replace the quotes.
Example:
I change: ‘google’
to: ‘google’
Question:
When I do the:
$ tg-admin sql create
I get an error:
File “build/bdist.macosx-10.3-fat/egg/sqlalchemy/schema.py”, line 149, in __call__
File “build/bdist.macosx-10.3-fat/egg/sqlalchemy/schema.py”, line 31, in _init_items
File “build/bdist.macosx-10.3-fat/egg/sqlalchemy/schema.py”, line 449, in _set_parent
sqlalchemy.exceptions.ArgumentError: The ‘index’ keyword argument on Column is boolean only. To create indexes with a specific name, append an explicit Index object to the Table’s list of elements.
it is caused by the 3rd line below:
tags_table = Table(“tags”, metadata,
Column(‘tag_id’, Integer, primary_key=True),
Column(‘name’, Unicode(32), index=’tag_idx’))
I am running with MySQL database.
I can use the tutorial without specifying the index or index name, but it would be useful.
Thanks.
Pingback: sites poker en ligne
Finance and treasury experts can take the Certified Treasury Professional Exam from Morgan International before taking the certification. Passing the first time is key.
I see you share interesting stuff here, you
can earn some additional money, your blog has huge
This post is on 13 spot in google’s search results, if you want
more visitors, you should build more backlinks to your articles, there is
one trick to get free, hidden backlinks from authority forums, search on youtube:
how to get hidden backlinks from forums
This post will assaist the internet viewers for creating new website or even a blog from start to end.
I was extremely pleased to discover this great site. I want to to thank you for your time for this
fantastic read!! I definitely enjoyed every little bit of it and i also hsve you saved ass a favorite to loook at new information in your web site.
Want to train muscles fast and become huge?
No need to wait months and months, get the best Muscle Boosting BodyBuilding Supplements!
Try it today muscles Supplements
FEEL YOUNG AGAIN!
Restore your youthful energy – Use Creatine that ups your training and helps to build muscles fast!
I have noticed you don’t monetize thesamet.com, don’t waste your traffic, you can earn extra cash every month with new monetization method.
This is the best adsense alternative for any type of website (they approve all
websites), for more details simply search in gooogle: murgrabia’s tools | http://www.thesamet.com/blog/2006/11/17/tutorial-how-to-implement-tagging-with-turbogears-and-sqlalchemy/ | CC-MAIN-2020-16 | refinedweb | 1,448 | 65.01 |
Close to leaving, I decided to make up for the knowledge of C++11 that I had never learned before. Suddenly, I turned to exception handling. I felt it was a little fun and wrote a test program myself. Then the three views were completely subverted.
The source code is as follows:
#include <iostream> #include <string> #include <exception> void speak(int i) { if(i <= 0) { throw "System get a wrong..."; } } void main() { try { speak(-1); } catch(std::exception e) { std::cerr << "exception info:" << e.what() << std::endl; } }
It's very simple. It's right. It's also right to compile. Then it runs and hangs. Through the Debug trace, it's found that such a line hangs
catch(std::exception e), and then all kinds of doubts, all kinds of changes, it's just a matter of reloading the system.
Unintentionally changed this sentence, and then handled the exception by yourself. The changed code is as follows:
#include <iostream> #include <string> #include <exception> void speak(int i) { if(i <= 0) { throw std::exception("System get a wrong..."); } } void main() { try { speak(-1); } catch(std::exception e) { std::cerr << "exception info:" << e.what() << std::endl; } }
Pay attention to throw. It's silly. Maybe exception is an explicit class. There's not much Kung Fu research. Just remember. The reason why the program hangs up may be that it can't be captured here, and it is directly handed over to the operating system.
==================Supplementary notes===================
I posted the statement of the exception class I said in the morning below. I ran into problems under different platforms:
_STDEXT_BEGIN class exception { // base of all library exceptions public: static _STD _Prhand _Set_raise_handler(_STD _Prhand _Pnew) { // register a handler for _Raise calls const _STD _Prhand _Pold = _STD _Raise_handler; _STD _Raise_handler = _Pnew; return (_Pold); } ====Here's why char *Cannot automatically convert to exception Why? // this constructor is necessary to compile // successfully header new for _HAS_EXCEPTIONS==0 scenario explicit __CLR_OR_THIS_CALL exception(const char *_Message = _MESG("unknown"), int x=1) _THROW0() : _Ptr(_Message) { // construct from message string (void)x; } __CLR_OR_THIS_CALL exception(const exception& _Right) _THROW0() : _Ptr(_Right._Ptr) { // construct by copying _Right } exception& __CLR_OR_THIS_CALL operator=(const exception& _Right) _THROW0() { // assign _Right _Ptr = _Right._Ptr; return (*this); } virtual __CLR_OR_THIS_CALL ~exception() { // destroy the object } virtual const char * __CLR_OR_THIS_CALL what() const _THROW0() { // return pointer to message string return (_Ptr != 0 ? _Ptr : "unknown exception"); } void __CLR_OR_THIS_CALL _Raise() const { // raise the exception if (_STD _Raise_handler != 0) (*_STD _Raise_handler)(*this); // call raise handler if present _Doraise(); // call the protected virtual _RAISE(*this); // raise this exception } protected: virtual void __CLR_OR_THIS_CALL _Doraise() const { // perform class-specific exception handling } protected: const char *_Ptr; // the message pointer }; _STDEXT_END
Another problem I have encountered is that in gunc, exception does not have a constructor with char. To construct an exception through char, only the subclass with this constructor can be used. However, on windows platform, exception has this constructor. Just remember here. | https://programmer.group/c-exception-throwing-and-catching.html | CC-MAIN-2019-47 | refinedweb | 483 | 55.03 |
import socket import sys import datetime import os try: username = "root" password = "Apacheah64" db_name = "DB_GPS" table_name = "Tbl_GPS" host = "" port = 6903 buf = 4096 except IndexError: sys.exit(1) s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.bind((host, port)) while 1: data = s.recv(buf) if not data: print("Client has exited!") break else: print("\nReceived message '", data,"'") # Close socket s.close()
the bytes i m received should be 43 bytes, but what i received from client is
Received message ' b'\x0f\x00\x00\x00NR09G05164\x00' ' ? only 15 bytes. why?
Below is Original Bytes 43 bytes
00 00 00 01 00 06 ec 44 76 a6 21 c2 00 00 08 00 45 00 00 2b 08 43 00 00 34 11 81 2b cb 52 50 db 67 0d 7a 19 24 2d 1a f7 00 17 83 26 0f 00 00 00 4e 52 30 39 47 30 35 31 36 34 00 | https://community.esri.com/thread/66133-python-bytes-are-missing-after-recv-from-udp-socket | CC-MAIN-2019-18 | refinedweb | 154 | 79.6 |
This site is very useful to me and I request u to
This site is very useful to me and I request u to give more information.
Thanks and regards,
neeraja
Question: What method is used to specify a contain
Question: What method is used to specify a container's layout?
Answer: The setLayout() method is used to specify a container's layout.
Question: What method is used to specify a container's layout?
Answer: The setLayout() method is used to spe
This site is very useful not only for and also som
This site is very useful not only for and also somany members,which are preparing for interviews and I request u to give more information.
super
this site is very very useful to java developers and also who are facing the intervies thanks to roseindia.com thanks a lot u.
AWESOME
THE SITE IS REALLY AWESOME
struts
I want to know why *.do is written in web.xml in
<url-pattern>*.do</url-pattern>
why not some *.jsp or some other thing........
Also i want to know what are the config files we will have while running in jboss...
how to make .exe file from class file
hi sir
plz send me method to generate .exe file from class file or how to make any class file platefrom dependent...
sardhara ravin
9860195427
Compliment
This site is good for both beginners and experience java professional
advanced java
ITis very usefull 4 computer students .i wantadvanced java 4 refernce
This Site is really a spoonfeed
nice to have Rose india as my colleague in my job
any suggestion on how to make .exe file using App
Sir / Mam
This is sagar iam a student of APTECH and doing a Course presently iam our topic is on Core Java and i have created small programm in Core Java using Applet, but i need help how to get .Exe file in core any suggestion pls pls pls pls mak
Create Exe File
As i am done project in struts can you please help me how to create the executable file in struts please help me
project
I am doing my degree project
in J2EE.So i need ur help.i am doing in jsp.i need jsp tools and what are lesson i want to study in jsp my project based.
about javainterview questions
hi...this site so nice...........if possible send new things to my email id......any way thanku.......
Java Program
I doing a project in CoreJava.
I want the code of ..
If I click on one button in one frame.I want to display next Fame.
jspbook
jspbook
Java Interview Questions
This question are very important to facing the interview.
good
this is good service
java
satisfactory
struts ebook
i need a struts ebook
plz give me..............
I want to know about Interview Questions
Questions of different companies
java question answer
plz send the java question answer
java - Java Interview Questions
java hello sir this is suraj .i wanna ask u regarding interview.../interviewquestions/
Here you will get lot of interview questions... difficult 2 answer....so could u plz suggest me with sme information regarding tis
code and specification u asked - Java Beginners
code and specification u asked you asked me to send the requirements... found including a browser can be very useful, example. The user browse...();
display.dispose();
}
}
so i have sent u the code
Thank U - Java Beginners
Thank U Thank U very Much Sir,Its Very Very Useful for Me.
From SUSHANT
Java - Java Interview Questions
Java HI, This is Ramesh.
can u please tell me the difference... on web servers.
3)An applet have GUI whereas servlets don't.
4)Applets are useful to develop the static web pages whereas Servlets are useful to develop
java questions - Java Interview Questions
java questions HI ALL ,
how are all of u??
Plz send me the paths of java core questions and answers pdfs or interview questions pdfs or ebooks :)
please favor me best books of interviews questions for craking
help me in these - Java Interview Questions
help me in these hello every body
i have some question if you cam plz answer me it is important to me
and these are the questions :
1)Write...[]=st.toCharArray();
for(int i=0;i 4)
import java.util.*;
public class
Collection of Large Number of Java Interview Questions!
Interview Questions - Large Number of Java Interview Questions
Here you....
The Core Java Interview Questions
More interview questions on core Java.. Read
Java - Java Interview Questions
Interview interview, Tech are c++, java, Hi friend,
Now get more...Java Respected Sir/Madam
First i can introduce my self, Am Ashok Kumar C from Tamilnadu, I am going to finish Master Degree In Computer
India website.
Index |
Ask
Questions | Site
Map
Web Services...
| Awt
Tutorials | Java Certification
| Interview Question ...
Installation Guide |
Ask
Questions | Java Q&As
Web Hosting Services | http://www.roseindia.net/tutorialhelp/allcomments/50 | CC-MAIN-2015-11 | refinedweb | 817 | 65.73 |
Anna Ullrich's blog, devoted to art, creativity, front end web design and development, and Microsoft Expression Web
As you've surely discovered by now, in the top right corner of Internet Explorer 7 and Firefox is a text box you can use to search the web:
If you type something in that box and press ENTER, the browser sends the text you entered to your default search engine, such as Google or Live Search. That's pretty straight forward.
But what you may not have noticed is the drop down menu on the right side of the text box:
If you click the down pointing arrow (not the magnifying glass mind you), you get a drop down menu that enables you to select particular websites to search with that text box. For example, you may be able to select Wikipedia, Ebay, Amazon, etc. Here's my list of search providers:
And more and more websites are providing users with additional search options. For example, when you visit Godaddy.com, their homepage presents you with the option to add GoDaddy search to your browser:
I've been wanting to add the World Wide Web Consortium (W3C) website to the list of search options so that I can quickly look up HTML or CSS properties:
I have that website bookmarked, as well as the CSS specs and the HTML specs, and there are search queries you can use to target a particular website, but I've always thought it would be more convenient to do a simple search from that text box at the top of the browser.
Well ask and you shall receive! I emailed all the developers on the Expression Web team and one of the them, Tomas, set me up. So I'm here to share the love with y'all.
We're giving you three options for searching the W3C website, using either Google or Live Search, and you can add as many of these as you'd like. These search options were tested in Internet Explorer 7 and Firefox:
To add any of these search options to your browser:
A page opens in a new browser window.
You may notice that above the Add Search Providers submenu in IE7 are the three new Google (or Live Search) W3C search options. If you click one of those, you are not adding the option to your browser, instead your are simply selecting a search option for one time use from my page. So be sure to point to one of the options under Add Search Providers.
And that's it. For this example, I selected Google Entire W3C, and now that option appears in the list of search options:
Is that useful to you? Let me know what you think!
So how did Tomas accomplish that? Here's how, in his own words:
There is an OpenSearch standard for doing this. Technically, you need a short static XML document somewhere on the web server that describes your search engine. For example, this is the file for English Wikipedia:
<?xml version="1.0"?>
<OpenSearchDescription xmlns="">
<ShortName>Wikipedia (en)</ShortName>
<Description>Wikipedia (en)</Description>
<Image height="16" width="16" type="image/x-icon"></Image>
<Url type="text/html" method="get" template="{searchTerms}"/>
<Url type="application/x-suggestions+json" method="GET" template="{searchTerms}&namespace=0"/>
</OpenSearchDescription>
Browsers have an “autodiscovery” mechanism – you provide a <link> on your page (in <head>) and IE7/FF2 should do something visually to give you a hint that you have a search provider.
<link rel="search" href=
Optionally, you can provide a little JavaScript on your page to add a search provider – this way you can create your own active links, buttons or ads directly on your page.
window.external.AddSearchProvider("");
Take a look at the best practices for other tips and hints.
Tom
I didn't realize you were an MS employee -- I'll hopefully get to meet you at the next Eastside photo meetup... and maybe convince you to help me get going with Expression :) | http://blogs.msdn.com/anna/archive/2008/05/11/add-browser-shortcuts-for-searching-the-w3c.aspx | crawl-002 | refinedweb | 671 | 64.75 |
systemjs-plugin-htmlsystemjs-plugin-html
Bridging the gap between SystemJS (Universal dynamic module loader) and HTML imports. Load HTML imports via SystemJS, include SystemJS modules (ES6, AMD, CommonJS) in HTML Imports, building using vulcanize via jspm and SystemJS builder.
Watch out: This project is an experiment. The HTML imports specification is still in flux and SystemJS is still new.
InstallInstall
jspm install html=github:Hypercubed/systemjs-plugin-html
Basic usageBasic usage
import './dom-element.html!'
The html file is imported as an HTML import. The webcomponent.js polyfills may be required in browsers that lack native support. The code above is equivalent to:
<link rel="import" href="./dom-element.html">
See examples in the test folder
BundlingBundling
Bundling of html files is done using Vulcanize. When bundling an
build.html will be created along side the
build.js file. You can disable html imports bundling by setting
System.buildHTML = false.
TestsTests
Testing using karma:
karma start
or use and navigate to
Tested with:
- JSPM v0.15.7
- SystemJS v0.16.11
Tested in:
- Chrome Version 43.0.2357.132 m
- IE 11.0.9600
- Firefox 39.0
- Firefox 41.0a2
LicenseLicense
MIT | https://www.npmjs.com/package/systemjs-plugin-html | CC-MAIN-2022-21 | refinedweb | 192 | 54.39 |
Auto import
When you reference a class that has not been imported, PyCharm helps you locate this file and add it to the list of imports. You can import a single class or an entire package, depending on your settings.
The import statement is added to the imports section, but the caret does not move from the current position, and your current editing session does not suspend. This feature is known as the Import Assistant.
The same possibility applies to XML files. When you type a tag with an unbound namespace, the import assistant suggests to create a namespace and offers a list of appropriate choices.
Creating imports on the fly
Import packages on-the-fly
Start typing a name in the editor. If the name references a class that has not been imported, the following prompt appears:
The unresolved references will be underlined, and you will have to invoke intention action Add import explicitly.
Press Alt+Enter. If there are multiple choices, select the desired import from the list.
You can define your preferred import style for Python code by using the following options available on the Auto Import page of the project settings ( ):
Optimizing imports
Sooner or later, some of the imported classes or packages become redundant to the code. PyCharm.
Besides cleaning the code from the unused imports, PyCharm formats the existing import statements according to the Style Guide for Python Code. In doing so, PyCharm splits import statements into separate lines, and sorts them into groups (refer to the Imports section for details).
Also, imports are sorted alphabetically and case-sensitively within the respective groups:
You can modify the sorting policy in the Import tab of the Python code style settings ( ). See Python Code Style Settings for more information.
Optimize imports in the entire project
Switch the focus to the Project tool window and do one of the following:
From the main menu, choose.
Press Ctrl+Alt+O.
The Optimize Imports dialog
From the main menu, choose.
Press Ctrl+Alt+O.
Place the caret at the import statements, click
, and choose Remove unused import.
Open the Reformat File dialog Ctrl+Alt+Shift+L and select the Optimize imports checkbox.
Toggling relative and absolute imports
PyCharm helps you organize relative and absolute imports within a source root. With the specific intention, you can convert absolute imports into relative and relative imports into absolute.
If your code contains any relative import statement, PyCharm will add relative imports when fixing the missing imports.
Note that relative imports work only within the current source root: you cannot relatively import a package from another source root.
The intentions prompting you to convert imports are enabled by default. To disable them, open project Settings/Preferences (Ctrl+Alt+S), select , and deselect the Convert absolute import to relative and Convert relative import to absolute.
Add import statements on code completion
PyCharm adds import statements when you complete exported JavaScript or TypeScript symbols.
You can disable auto-import on completion and use quick-fixes instead:
In the Settings/Preferences dialog Ctrl+Alt+S, go to .
On the Auto Import page that opens, use the checkboxes in the TypeScript/JavaScript area to enable or disable import generation on code completion. | https://www.jetbrains.com/help/pycharm/creating-and-optimizing-imports.html | CC-MAIN-2021-04 | refinedweb | 535 | 52.09 |
- 29 Nov, 2018 4 commits
- Toon Claes authored
- Imre Farkas authored
Adds gitlab.impersonation_enabled config option defaulting to true to keep the current default behaviour. Only the act of impersonation is modified, impersonation token management is not affected.
commit 10456b1e9240886432f565dd17689080bbb133b9 Merge: 312c1a9bdf8 a5f4627857b Author: Shinya Maeda <shinya@gitlab.com> Date: Thu Nov 29 14:33:21 2018 +0900 Merge branch 'master-ce' into add-counter-for-trace-chunks commit 312c1a9bdf8efc45c3fed5ff50f05cc589bbb4ed Author: Shinya Maeda <shinya@gitlab.com> Date: Wed Nov 28 20:06:18 2018 +0900 Fix coding offence commit e397cc2ccc1b2cf7f8b3558b8fa81fe2aa0ab366 Author: Shinya Maeda <shinya@gitlab.com> Date: Wed Nov 28 14:40:24 2018 +0900 Fix tracking archive failure
- 28 Nov, 2018 1 commit
- 27 Nov, 2018 3 commits
- Tiago Botelho authored
Clears the import related columns and code from the Project model over to the ProjectImportState model
We want to keep failed install pods around so that it is easier to debug why a failure occured. With this change we also need to ensure that we remove a previous pod with the same name before installing so that re-install does not fail. Another change here is that we no longer need to catch errors from delete_pod! in CheckInstallationProgressService as we now catch the ResourceNotFoundError in Helm::Api. The catch statement in CheckInstallationProgressService was also probably too broad before and should have been narrowed down simply to ResourceNotFoundError.
- 26 Nov, 2018 1 commit
- Chris Baumbauer authored
- 23 Nov, 2018 1 commit
Prefer `each` to `find` since the former does the same thing except doesn't add the extra delegation step that may be causing the binding_of_caller gem issues. Relates to
- 22 Nov, 2018 1 commit
- 21 Nov, 2018 3 commits
- Takuya Noguchi authored
Signed-off-by: Takuya Noguchi <takninnovationresearch@gmail.com>
Creating a merge request with `merge_request[force_remove_source_branch]` parameter would result in an Error 500 since this attribute was passed directly to the merge request. Fix this by properly parsing this attribute into `merge_params`. Closes
Ruby 2.5.3 in combination with Rails 5 appears to crash in development when a CI job is selected for a runner. For some reason, the return inside the block seems to seg fault. Let's get rid of this since this isn't a good coding practice in any case. Relates to Also see
- 19 Nov, 2018 2 commits
- Kamil Trzciński authored
This ensures that variables accept only string, alongside also improves kubernetes_namespace, improving validation and default value being set.
- 17 Nov, 2018 2 commits
- Jasper Maes authored
Admins would be prevented from adding a project deploy key since the accessible keys would be restricted to the user's keys. Also backports a spec for DeployKeysController from
- 15 Nov, 2018 1 commit
- Rémy Coutable authored
This forks live at and fixes an issue where default_value_for wouldn't handle `ActionController::Parameters` correctly with Rails 5. This fixes . Signed-off-by: Rémy Coutable <remy@rymai.me>
- 14 Nov, 2018 2 commits
With this quick action the user can create a new MR starting from the current issue using as `source_branch` the given `branch name` and as `target_branch` the project default branch. If the `branch name` is omitted a name is automatically created starting from the issue title.
- 13 Nov, 2018 5 commits
- Rémy Coutable authored
Signed-off-by: Rémy Coutable <remy@rymai.me>
- 12 Nov, 2018 2 commits
Move all logic for destroying a Pipeline into a service so it's easily reusable.
- Jarka Košanová authored
Extract code to make it easier reusable - introduce AttributesRewriter and ContentRewriter - support group entites when rewriting content - make Uploader copy_to working for Namespaces
- 09 Nov, 2018 2 commits
There is no reason to specially handle 404 here. We only handle 404 specifically if we are fetching something and want to return `nil` if 404. This does not apply here.
- 08 Nov, 2018 4 commits
- Robert Speicher authored
Previously the string was spanning multiple lines and included a needless `\n` character in the resulting error message. This change also reduces duplication by assigning two variables.
These applications will need further work in future issues to enable them to work with groups of projects.
- Add pages javascripts to intialize clusters for group pages - Move specs asserting gcp specific validations from controller into UpdateService spec - Also teach Clusters::ApplicationController about groups
- 07 Nov, 2018 6 commits
This reverts commit a82a595728d54bdc12e51dfcfb22e9eddc449143, reversing changes made to e7df959b8f99875edd246c7ac7779c3203e8755e.
-.
- Douwe Maan authored
This reverts merge request !22526
- Francisco Javier López authored
This new endpoint allow users to update a submodule's reference. The MR involves adding a new operation RPC operation in gitaly-proto (see gitlab-org/gitaly-proto!233) and change Gitaly to use this new version (see gitlab-org/gitaly!936). See gitlab-org/gitlab-ce!20949 | https://foss.heptapod.net/heptapod/heptapod/-/commits/2a246d83e9f488fd856d39e7f2ecf00064cc11c0/app/services | CC-MAIN-2022-21 | refinedweb | 784 | 51.99 |
Daniel Shahaf wrote:
>).
OK. To run this in production, it also must not error out in a normal
commit. I haven't fully traced the verify code to see whether it might
call svn_fs_fs__read_current().
> Julian Foad wrote:
>>)
Maybe we should propose this as an enhancement.
>> I meant the opposite direction: a successful commit does some things
>> post-commit (e.g. remove the txn directory, update the fulltext cache,
>> update the rep cache) and this verify must not assume any of those things
>> have been done already.
>
> Taking these one by one:
[...]
> So that's these three. Is there anything else that's updated between
> bumping 'current' and returning control to svn_fs_commit_txn()'s caller?
No.
> Or some kind of process-global or machine-global cache shared between
> the two FS handles, despite the different cache namespaces?
Not sure.
Nevertheless everything we discussed here gives me some confidence. Thanks.
- Julian
Received on 2017-01-31 13:01:00 CET
This is an archived mail posted to the Subversion Dev
mailing list. | https://svn.haxx.se/dev/archive-2017-01/0093.shtml | CC-MAIN-2018-05 | refinedweb | 169 | 66.64 |
instructables, adafruit etc. But after a while it became boring and I've started looking for something actually useful for me. My previous playground was my new phone several weeks ago which came with new feature NFC reading. Thought and thought together gave me idea for attendance system for our small (family) company using NFC tags. The additional kick was my interest in those systems before, but discovering the cheapest are for around $750 I decided it is too much for such small company as 6 employees.
As I don't have any experience with "mature" attendance systems, I've decided only to implement basic features. These consist of logging incoming people, outcoming people, start and end of a break and deleting last inserted action (in case of mistype during logging).
All these actions are logged into local MySQL database from where I can display it and manipulate with my front-end application. Because the SD card is not such safe data medium, especially when loosing power unexpectedly, I'm uploading all data daily to my local server, where I keep backup in case of corruption of the SD card.
During normal operation of the logging station, display shows current date and time and calls for action selection. When you choose appropriate action on the keyboard, display shows selected action and calls for attaching the TAG to the reader. Also the LED under display also turns on.
When the TAG is read, the LED turns off and beep signal comes out the speaker. For a brief moment display shows action and name of the owner of the card. Then everything returns to the default state waiting for another entrance.
For foot notice, this whole project including source codes is licensed under Beer-ware licence as follows:
Jakub Dvorak wrote this file. As long as you retain this notice you can do whatever you want with this stuff. If we meet some day, and you think this stuff is worth it, you can buy me a beer in return.
hmm.. when running attendance.py
File "/home/pi/RPi.GPIO-0.5.7/attendance/MFRC522.py", line 11, in <module>
import spi
ImportError: No module named spi
install RPi.GPIO-0.5.7.tar.gz
Other skill with PNEV512R :
Quotation Required of Tags Attendance system.
This is great! Exactly what I have been looking for! Thanks so much. Im going to be getting my NFC reader from hong kong soon. I got it for £2.50 so its going to be fun. Thanks. | http://www.instructables.com/id/Attendance-system-using-Raspberry-Pi-and-NFC-Tag-r/ | CC-MAIN-2014-42 | refinedweb | 421 | 65.52 |
Compatible with Windows7 & Mac OS X Snow Leopard
Hi Deborah,
Thanks for your reply.
I am using usual namespaces in xsl file not in xml file as follows
<xsl:stylesheet version="2.0"
xmlns:xsl=""
xmlns:exsl=""
extension-element-prefixes="exsl"
xmlns:xalan=""
xmlns:
I am creating javax.xml.transform.Source object for xml file and pass it
to transform method of javax.xml.transform.Transformer.
I've written a java class that uses org.apache.fop.apps.Fop to convert
xml to PDF using xsl file.
Navpreet
-----Original Message-----
From: Deborah Pickett [mailto:debbiep-list-xsl@xxxxxxxxxx]
Sent: Wednesday, 25 March 2009 4:47 PM
To: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re: [xsl] A colon is not allowed in the name
Hi Navpreet,
You haven't said whether this document:
> <header type="new">
> <?QM: GENERATOR [Ref] 10055: ParaHeading: NEW?>Generate New
Data
> </header>
uses namespaces anywhere, nor how you are reading it in, nor what you
are
using to process it. This matters, because:
> ERROR: 'A colon is not allowed in the name 'QM:' when namespaces are
> enabled.'
is a rule that comes straight from the Namespaces-in-XML spec:
QUOTE: in a namespace-well-formed document [...] No entity names,
processing instruction targets, or notation names contain any colons.
You can only avoid that restriction by (a) having whoever produced that
processing instruction stop using a colon (which you have said is
outside
your power), or (b) strictly avoiding the use of namespaces in the
source
document. Doing (b) may require you to remind the XML parser that it is
parsing a non-namespaced document (see
rFactory.html#isNamespaceAware()
for one such mechanism).
As an aside, you can get away with (b) only because the XML spec
reluctantly permits colons in names, though the spec pretty much
questions
the intelligence of anyone who does it:
QUOTE: The Namespaces in XML Recommendation [XML Names] assigns a
meaning
to names containing colon characters. Therefore, authors should not use
the colon in XML names except for namespace purposes
You should show that to the person who stopped you from pursuing option
.
The Company does not warrant nor guarantee that this email communication is
free
from errors, virus, interception or interference. | http://www.oxygenxml.com/archives/xsl-list/200903/msg00343.html | CC-MAIN-2015-22 | refinedweb | 367 | 61.36 |
This Tutorial Provides a Detailed Explanation of AVL Trees and Heap Data Structure In C++ Along with AVL Tree Examples for Better Understanding:
AVL Tree is a height-balanced binary tree. Each node is associated with a balanced factor which is calculated as the difference between the height of its left subtree and the right subtree.
The AVL tree is named after its two inventors i.e. G.M. Abelson-Velvety and E.M. Landis, and was published in 1962 in their paper “An algorithm for the organization of information”.
=> Look For The Entire C++ Training Series Here.
What You Will Learn:
AVL Tree In C++
For the tree to be balanced, the balanced factor for each node should be between -1 and 1. If not the tree will become unbalanced.
An Example AVL Tree is shown below.
In the above tree, we can notice that the difference in heights of the left and right subtrees is 1. This means that it’s a balanced BST. As the balancing factor is 1, this means that the left subtree is one level higher than the right subtree.
If the balancing factor is 0, then it means that the left and right subtrees are at the same level i.e. they contain equal height. If the balancing factor is -1, then the left subtree is one level lower than the right subtree.
AVL tree controls the height of a binary search tree and it prevents it from becoming skewed. Because when a binary tree becomes skewed, it is the worst case (O (n)) for all the operations. By using the balance factor, AVL tree imposes a limit on the binary tree and thus keeps all the operations at O (log n).
AVL Tree Operations
The following are the operations supported by AVL trees.
#1) AVL Tree Insertion
Insert operation in the C++ AVL tree is the same as that of the binary search tree. The only difference is that in order to maintain the balance factor, we need to rotate the tree to left or right so that it doesn’t become unbalanced.
.
// C++ program for AVL Tree #include<iostream> using namespace std; // An AVL tree node class AVLNode { public: int key; AVLNode *left; AVLNode *right; int depth; }; //get max of two integers int max(int a, int b){ return (a > b)? a : b; } //function to get height of the tree int depth(AVLNode *n) { if (n == NULL) return 0; return n->depth; } // allocate a new node with key passed AVLNode* newNode(int key) { AVLNode* node = new AVLNode(); node->key = key; node->left = NULL; node->right = NULL; node->depth = 1; // new node added as leaf return(node); } // right rotate the sub tree rooted with y AVLNode *rightRotate(AVLNode *y) { AVLNode *x = y->left; AVLNode *T2 = x->right; // Perform rotation x->right = y; y->left = T2; // Update heights y->depth = max(depth(y->left), depth(y->right)) + 1; x->depth = max(depth(x->left), depth(x->right)) + 1; // Return new root return x; } // left rotate the sub tree rooted with x AVLNode *leftRotate(AVLNode *x) { AVLNode *y = x->right; AVLNode *T2 = y->left; // Perform rotation y->left = x; x->right = T2; // Update heights x->depth = max(depth(x->left), depth(x->right)) + 1; y->depth = max(depth(y->left), depth(y->right)) + 1; // Return new root return y; } // Get Balance factor of node N int getBalance(AVLNode *N) { if (N == NULL) return 0; return depth(N->left) - depth(N->right); } //insertion operation for node in AVL tree AVLNode* insert(AVLNode* node, int key) { //normal BST rotation if (node == NULL) return(newNode(key)); if (key < node->key) node->left = insert(node->left, key); else if (key > node->key) node->right = insert(node->right, key); else // Equal keys not allowed return node; //update height of ancestor node node->depth = 1 + max(depth(node->left), depth(node->right)); int balance = getBalance(node); //get balance factor // rotate if unbalanced // node; } // find the node with minimum value AVLNode * minValueNode(AVLNode* node) { AVLNode* current = node; // find the leftmost leaf */ while (current->left != NULL) current = current->left; return current; } // delete a node from AVL tree with the given key AVLNode* deleteNode(AVLNode* root, int key) { if (root == NULL) return root; //perform BST delete if ( key < root->key ) root->left = deleteNode(root->left, key); else if( key > root->key ) root->right = deleteNode(root->right, key); else { // node with only one child or no child if( (root->left == NULL) || (root->right == NULL) ) { AVLNode *temp = root->left ? root->left : root->right; if (temp == NULL) { temp = root; root = NULL; } else // One child case *root = *temp; free(temp); } else { AVLNode* temp = minValueNode(root->right); root->key = temp->key; // Delete the inorder successor root->right = deleteNode(root->right, temp->key); } } if (root == NULL) return root; // update depth root->depth = 1 + max(depth(root->left), depth(root->right)); // get balance factor int balance = getBalance(root); //rotate the tree if unbalanced // Left Left Case if (balance > 1 && getBalance(root->left) >= 0) return rightRotate(root); // Left Right Case if (balance > 1 && getBalance(root->left) < 0) { root->left = leftRotate(root->left); return rightRotate(root); } // Right Right Case if (balance < -1 && getBalance(root->right) <= 0) return leftRotate(root); // Right Left Case if (balance < -1 && getBalance(root->right) > 0) { root->right = rightRotate(root->right); return leftRotate(root); } return root; } // prints inOrder traversal of the AVL tree void inOrder(AVLNode *root) { if(root != NULL) { inOrder(root->left); cout << root->key << " "; inOrder(root->right); } } // main code int main() { AVLNode *root = NULL; // constructing an AVL tree root = insert(root, 12); root = insert(root, 8); root = insert(root, 18); root = insert(root, 5); root = insert(root, 11); root = insert(root, 17); root = insert(root, 4); //Inorder traversal for above tree : 4 5 8 11 12 17 18 cout << "Inorder traversal for the AVL tree is: \n"; inOrder(root); root = deleteNode(root, 5); cout << "\nInorder traversal after deletion of node 5: \n"; inOrder(root); return 0; }
Output:
Inorder traversal for the AVL tree is:
4 5 8 11 12 17 18
Inorder traversal after deletion of node 5:
4 8 11 12 17 18
Note that we have used the example tree shown above to demonstrate the AVL Tree in the program.
Applications Of AVL Trees
- AVL trees are mostly used for in-memory sorts of sets and dictionaries.
- AVL trees are also used extensively in database applications in which insertions and deletions are fewer but there are frequent lookups for data required.
- It is used in applications that require improved searching apart from the database applications.
HEAP Data Structure In C++
A heap in C++ is a special tree-based data structure and is a complete binary tree.
Heaps can be of two types:
- Min-heap: In min-heap, the smallest element is the root of the tree and each node is greater than or equal to its parent.
- Max-heap: In max-heap, the largest element is the root of the tree and each node is less than or equal to its parent.
Consider the following array of elements:
10 20 30 40 50 60 70
The min-heap for the above data is represented below:
The max heap using the above data is shown below:
Binary Heap C++
A binary heap is the common implementation of a heap data structure.
A binary heap has the following properties:
- It is a complete binary tree when all the levels are completely filled except possibly the last level and the last level has its keys as much left as possible.
- A binary heap can be a min-heap or max-heap.
A binary heap is a complete binary tree and thus it can best be represented as an array.
Let’s look into the array representation of binary heap.
Consider the following binary heap.
In the above diagram, traversal for the binary heap is called level order.
Thus the array for the above binary heap is shown below as HeapArr:
As shown above, HeapArr[0] is the root of the binary heap. We can represent the other elements in general terms as follows:
If HeapArr[i] is an ith node in a binary heap, then the indexes of the other nodes from the ith node are:
- HeapArr [(i-1)/2] => Returns the parent node.
- HeapArr [(2*i)+1] => Returns the left child node.
- HeapArr [(2*i)+2] => Returns the right child node.
The binary heap satisfies “ordering property” which is of two types as stated below:
- Min Heap property: The minimum value is at the root and the value of each node is greater than or equal to its parent.
- Max Heap property: The maximum value is at the root and the value of each node is less than or equal to its parent.
Operations On Binary Heap
The following are the basic operations that are carried out on minimum heap. In the case of the maximum heap, the operations reverse accordingly.
#1) Insert() – Inserts a new key at the end of the tree. Depending on the value of the key inserted, we may have to adjust the heap, without violating the heap property.
#2) Delete() – Deletes a key. Note that the time complexity of both the insert and delete operations of the heap is O (log n).
#3) decreaseKey() – Decreases the value of the key. We might need to maintain the heap property when this operation takes place. The time complexity of the decreaseKey operation of the heap is also O (log n).
#4) extractMin() – Removes the minimum element from the min-heap. It needs to maintain the heap property after removing the minimum element. Thus its time complexity is O (log n).
#5) getMin() – Returns the root element of the min-heap. This is the simplest operation and the time complexity for this operation is O(1).
Heap Data Structure Implementation
Given below is the C++ implementation to demonstrate the basic functionality of min-heap.
#include<iostream> #include<climits> using namespace std; // swap two integers void swap(int *x, int *y) { int temp = *x; *x = *y; *y = temp; } // Min-heap class class Min_Heap { int *heaparr; // pointer to array of elements in heap int capacity; // maximum capacity of min heap int heap_size; // current heap size public: Min_Heap(int cap){ heap_size = 0; capacity = cap; heaparr = new int[capacity]; } // to heapify a subtree with the root at given index void MinHeapify(int ); int parent(int i) { return (i-1)/2; } // left child of node i int left(int i) { return (2*i + 1); } // right child of node i int right(int i) { return (2*i + 2); } // extract minimum element in the heap(root of the heap) int extractMin(); // decrease key value to newKey at i void decreaseKey(int i, int newKey); // returns root of the min heap int getMin() { return heaparr[0]; } // Deletes a key at i void deleteKey(int i); // Inserts a new key 'key' void insertKey(int key); void displayHeap(){ for(int i = 0;i<heap_size;i++) cout<<heaparr[i]<<" "; cout<<endl; } }; // Inserts a new key 'key' void Min_Heap::insertKey(int key) { if (heap_size == capacity) { cout << "\nOverflow: Could not insertKey\n"; return; } // First insert the new key at the end heap_size++; int i = heap_size - 1; heaparr[i] = key; // Fix the min heap property if it is violated while (i != 0 && heaparr[parent(i)] > heaparr[i]) { swap(&heaparr[i], &heaparr[parent(i)]); i = parent(i); } } void Min_Heap::decreaseKey(int i, int newKey) { heaparr[i] = newKey; while (i != 0 && heaparr[parent(i)] > heaparr[i]) { swap(&heaparr[i], &heaparr[parent(i)]); i = parent(i); } } int Min_Heap::extractMin() { if (heap_size <= 0) return INT_MAX; if (heap_size == 1) { heap_size--; return heaparr[0]; } // Store the minimum value,delete it from heap int root = heaparr[0]; heaparr[0] = heaparr[heap_size-1]; heap_size--; MinHeapify(0); return root; } void Min_Heap::deleteKey(int i) { decreaseKey(i, INT_MIN); extractMin(); } void Min_Heap::MinHeapify(int i) { int l = left(i); int r = right(i); int min = i; if (l < heap_size && heaparr[l] < heaparr[i]) min = l; if (r < heap_size && heaparr[r] < heaparr[min]) min = r; if (min != i) { swap(&heaparr[i], &heaparr[min]); MinHeapify(min); } } // main program int main() { Min_Heap h(11); h.insertKey(2); h.insertKey(4); h.insertKey(6); h.insertKey(8); h.insertKey(10); h.insertKey(12); cout<<"Heap after insertion:"; h.displayHeap(); cout<<"root of the heap: "<<h.getMin()<<endl; h.deleteKey(2); cout<<"Heap after deletekey(2):"; h.displayHeap(); cout <<"minimum element in the heap: "<< h.extractMin() <<endl; h.decreaseKey(1, 1); cout <<"new root of the heap after decreaseKey: "<< h.getMin()<<endl; return 0; }
Output:
Heap after insertion:2 4 6 8 10 12
root of the heap: 2
Heap after deletekey(2):2 4 12 8 10
minimum element in the heap: 2
new root of the heap after decreaseKey: 1
Applications Of Heaps
- Heapsort: Heapsort algorithm is effectively implemented using a binary heap.
- Priority Queues: Binary heap supports all the operations required for successfully implementing the priority queues in O (log n) time.
- Graph algorithms: Some of the algorithms related to graphs use priority queue and in turn, the priority queue uses binary heap.
- Worst-case complexity of quicksort algorithm can be overcome by using heap sort.
Conclusion
In this tutorial, we have seen two data structures i.e. AVL trees and Heap sort in detail.
AVL trees are balanced binary trees that are mostly used in database indexing.
All the operations performed on AVL trees are similar to those of binary search trees but the only difference in the case of AVL trees is that we need to maintain the balance factor i.e. the data structure should remain a balanced tree as a result of various operations. This is achieved by using the AVL Tree Rotation operation.
Heaps are complete binary tree structures that are classified into min-heap or max-heap. Min-heap has the minimum element as its root and the subsequent nodes are greater than or equal to their parent node. In max-heap, the situation is exactly opposite i.e. the maximum element is the root of the heap.
Heaps can be represented in the form of arrays with the 0th element as the root of the tree. Heap data structures are mainly used to implement heap sort and priority queues.
=> Visit Here To Learn C++ From Scratch. | https://www.softwaretestinghelp.com/avl-trees-and-heap-data-structure-in-cpp/ | CC-MAIN-2021-17 | refinedweb | 2,362 | 55.58 |
Trying to make Hello World native module for node.js
Got an Win32 Project in VS 2012 with one file:
#include <node.h> #include <v8.h> using namespace v8; Handle<Value> Method(const Arguments& args) { HandleScope scope; return scope.Close(String::New("world")); } void init(Handle<Object> target) { target->Set(String::NewSymbol("hello"), FunctionTemplate::New(Method)->GetFunction()); } NODE_MODULE(hello, init)
That`s compiles to hello.node.
Options:
– Dynamic Library (.dll)
– No Common Language Runtime Support
Use it like:
hello = require './hello' console.log hello.hello()
It works on local machine (win8 x64, node: 0.8.12)
But on remote server (windows server 2008 x64, node: 0.8.12, iisnode: 0.1.21 x64, iis7) it throws this error:
Application has thrown an uncaught exception and is terminated: Error:
%1 is not a valid Win32 application.)
What I tryed:
Playing with app pool settings (enable win32 apps) does not helped.
Iisnode x86 does not install on x64 os.
Can`t compile to x64 because of error: Error 2 error LNK1112: module machine type ‘X86’ conflicts with target machine type ‘x64’ C:\derby\hello\build\node.lib(node.exe) hello
Does anyone have any suggestions?
I dont know if it’s too late, but I found the answer after some trial and error, mainly the problem (in my machine) was that I compiled the nodejs on windows to be able to create the extension using visual C++, and I already had installed the nodejs from the page, if I try to run the test using the default installation (which was added to my PATH by the nodejs installer) then it fails, but if I use the compiled node.exe (the one I compiled to be able to reference the libs in Visual C++) then it works.
In summary, the problem is not with the extension, it’s with the nodejs compilation, use the node that you compiled (in order to build the VS solution I assume you did that) and then it should work on the remote machine.
Note: The problem relies on that you’re using node.exe compiled in 64bits to run a 32bits dll, that’s why it’s complaining, if you use node.exe in 32 bits it should work. (at least that solved my problem)
Unrelated to your probem: I get the same error (
Error: %1 is not a valid Win32 application) when trying to execute a script with extension “.node”, e.g.
node.exe example.node. Other extensions (.js, .txt, no extension at all) work fine.
Just had the same problem and even though the architectures of my node and addon were identical, I got similar errors messages. It turns out that you can’t rename the node executable. It has to be
node.exe, I was trying to test multiple versions at the same time so I had to put them in their own folders. After that it all worked fine. | https://exceptionshub.com/node-js-native-module-is-not-a-valid-win32-application-error.html | CC-MAIN-2021-21 | refinedweb | 486 | 64.1 |
pause - suspend the thread until signal is received
#include <unistd.h> int pause(void);
The pause() function suspends the calling thread until delivery of a signal whose action is either to execute a signal-catching function or to terminate the process.
If the action is to terminate the process, pause() will not return.
If the action is to execute a signal-catching function, pause() will return after the signal-catching function returns.
Since pause() suspends thread execution indefinitely unless interrupted by a signal, there is no successful completion return value. A value of -1 is returned and errno is set to indicate the error.
The pause() function will fail if:
- [EINTR]
- A signal is caught by the calling process and control is returned from the signal-catching function.
None.
None.
None.
sigsuspend(), <unistd.h>.
Derived from Issue 1 of the SVID. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/pause.html | CC-MAIN-2015-06 | refinedweb | 141 | 54.42 |
c# interview questions and answers paper 380 - skillgunNote: Paper virtual numbers may be different from actual paper numbers . In the page numbers section website displaying virtual numbers .
what is the use of using statement in c# ?
Using statement used for deleting managed objects.
Using statement used for calling dispose method of Idisposable interface properly by using try and finally pattern.
Using statement is used as a replacement to the C#'s finalize method.
None of the above.
using (TextReader tr = new StreamReader(@"C:\patients.txt"))
{
string allPatients=tr.ReadToEnd();
}
assume you are not using TextReader object without using or without wrapping it in using block
you must use try and finally pattern to ensure proper cleaning of unmanaged resources used by TextReader
object as shown below. If you observer definition of TextReader ( to see definition of a class in visual studio rightclick
on classname and click on GoTo Definition ) it is implementing IDisposable interface , hence you can say
that TextReader is using un managed resources .
TextReader tr1 = new StreamReader(@"C:\patients.txt");
try
{
string data = tr1.ReadToEnd();
}
finally
{
tr1.Dispose();
}
In simple words using block is converted into equivalent try and finally pattern for calling dispose method appropriately .
How to call dispose method in C# ?
Use try and finally pattern for calling dispose method.
Use using statement to wrap an object which is using unmanaged resources to call its dispose method automatically.
It is always better to call dispose method in destructors.
refer the previous question for better understanding .
identify the execution sequence or order of destructors for the following sample code ?
public class Test
{
public static void Main(string[] args)
{
B b = new B();
}
}
public class A
{
~A()
{
Console.WriteLine("Killing A");
}
}
public class B : A
{
~B()
{
Console.WriteLine("Killing B");
}
}
Only derived class destructor is executed.
Only base class destructor is executed.
First base class destructor is executed and then derived class destructor is executed.
First derived class destructor is executed then base class destructor is executed.
Execution sequence of destructors is just reverse of the execution of constructors in presence of inheritance.
Proof: How to prove execution sequence ?
Ans: write the code given in the question and place debug pointers in base and derived class destructors and observe execution.
What is the use of SuppressFinalize method of garbage collector class ?
suppressfinalize method stops garbage collector acting upon an object which is passed to it.
suppressfinalize method stops garbage collector from calling a destructor of an object which is passed to it.
suppressFinalize method used for calling garbage collector explicitly.
Statements 1 and 2 are correct.
What is the use of f-reachable queue ?
f-reachable queue contains all the the class objects which are having destructors associated .
Garbage collector will not delete the objects which are having references in f-reachable queue.
Statements 1 and 2 are correct.
what is the use of finalization queue ?
Garbage Collector removes all the objects which are having references in finalization queue by calling finalize methods on those objects.
Garbage Collector will not delete the objects which are having references in finalization queue.
Garbage Collector uses the finalization queue to identify the classes which are implementing Idisposable interface.
When an object reference is present in finalization queue means , the corresponding objects unmanaged resource has been already cleaned or deleted . Hence GC can able to delete the managed object which was using unmanaged resource.
How .
How Garbage Collector identifies dead objects ?
By reading references from CPU registers also by reading data from locals or stack memory variables.
By looking at data present in unmanaged heap.
By looking at the data present Managed heap memory .
for more details read
How to check number of times garbage collection has occurred in a Generation?
use GC.CollectionCount(generationNumber)
No way to check number garbage collections
use GC.Collect() method
Use GC.GetGeneration()
How to check the presence of particular object in a Generation?
use GC.Generation(objectreference)
use GC.GetGeneration(objectreference)
use GC.Collect() method .
use GC.GetGenerationNumber() .
Back To Top | http://skillgun.com/csharp/interview-questions-and-answers/paper/380 | CC-MAIN-2017-30 | refinedweb | 666 | 51.44 |
For the following Fortran program, the recursion could be replaced by a loop. That's what happening for the related C program, but for the Fortran program the recursion remains. (Tried with -O3.)
(I guess the I/O statement (_gfortran_st_write* and _gfortran_transfer_*_write) confuse the ME, but I do not see why that should prevent the loop transformation. Hints how to modify the FE to help the ME are welcome, too.)
! Fortran version
call x (1)
contains
recursive subroutine x (i)
use iso_fortran_env
integer, value :: i
if (mod (i, 1000000) == 0) write (error_unit,'(a,i0)')'i=', i
call x (i+1)
end subroutine x
end
/* C version */
#include <stdio.h>
static void
x (int i) {
if (!(i % 1000000))
fprintf(stderr, "i=%d\n", i);
x(i + 1);
}
int main () {
x (1);
}
The parameter has its address taken in
_gfortran_transfer_integer_write (&dt_parm.0, &i, 4);
thus we can't tail-recurse here (well, at least not as trivial as if
that wasn't the case).
(In reply to comment #1)
> The parameter has its address taken in
> _gfortran_transfer_integer_write (&dt_parm.0, &i, 4);
True, but the value is not modified - which the ME should know as the "fn spec" is ".wR"
(In reply to comment #2)
> (In reply to comment #1)
> > The parameter has its address taken in
> > _gfortran_transfer_integer_write (&dt_parm.0, &i, 4);
>
> True, but the value is not modified - which the ME should know as the "fn spec"
> is ".wR"
The parameter is not in SSA form so tail-recursion would have to insert
a store and reload the value at appropriate places. It doesn't support that. | https://gcc.gnu.org/bugzilla/show_bug.cgi?format=multiple&id=48363 | CC-MAIN-2015-27 | refinedweb | 266 | 63.29 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.