Generalized blockmodeling : According to Doreian, the drawbacks of generalized blockmodeling are as follows: unknown sensitivity to particular data features, examination of boundary problems, and it is computationally burdensome, which results in a constraint regarding practical network size (generalized blockmodeling is thus pr...
Generalized blockmodeling : The book of the same title, Generalized Blockmodeling, written by Patrick Doreian, Vladimir Batagelj and Anuška Ferligoj, was awarded the 2007 Harrison White Outstanding Book Award by the Mathematical Sociology Section of the American Sociological Association.
Generalized blockmodeling : Patrick Doreian, Vladimir Batagelj, Anuška Ferligoj, Mark Granovetter (Series Editor), Generalized Blockmodeling (Structural Analysis in the Social Sciences), Cambridge University Press 2004 (ISBN 0-521-84085-6) |
Generalized blockmodeling of binary networks : Generalized blockmodeling of binary networks (also relational blockmodeling) is an approach within generalized blockmodeling for analysing binary networks. As most network analyses deal with binary networks, this approach is also considered the fundamental approach of bl...
Generalized blockmodeling of binary networks : homogeneity blockmodeling binary relation binary matrix |
Generalized blockmodeling of valued networks : Generalized blockmodeling of valued networks is an approach within generalized blockmodeling dealing with valued (i.e., non-binary) networks. While generalized blockmodeling signifies a "formal and integrated approach for the study of the underlying functional anatomi...
Generalized blockmodeling of valued networks : Generalized blockmodeling of binary networks Homogeneity blockmodeling |
Homogeneity blockmodeling : In mathematics applied to the analysis of social structures, homogeneity blockmodeling is an approach in blockmodeling which is best suited as a preliminary or main approach to valued networks when prior knowledge about these networks is not available. This is because homogeneity blockmodel...
Homogeneity blockmodeling : Generalized blockmodeling of binary networks implicit blockmodeling blockmodeling linked networks Homogeneity and heterogeneity |
Implicit blockmodeling : Implicit blockmodeling is an approach in blockmodeling, similar to valued and homogeneity blockmodeling, in which an additional normalization is used initially and the parameter of the relevant link is then replaced by the block maximum. This approach was first proposed by Batag...
Andrej Mrvar : Andrej Mrvar is a Slovenian computer scientist and a professor at the University of Ljubljana. He is known for his work in network analysis, graph drawing, decision making, virtual reality, electronic timing and data processing of sports competitions. |
Andrej Mrvar : He is well known for his work on Pajek, a free software for analysis and visualization of large networks. Mrvar began work on Pajek in 1996 with Vladimir Batagelj. His book Exploratory Social Network Analysis with Pajek, coauthored with Wouter de Nooy and Vladimir Batagelj, is his most cited work. It was... |
Andrej Mrvar : Vidmar Award (Faculty of Electrical and Computer Engineering, University of Ljubljana): 1988, 1990 First prizes for contributions (with Vladimir Batagelj) to Graph Drawing Contests in years: 1995, 1996, 1997, 1998, 1999, 2000 and 2005 / Graph Drawing Hall of Fame. Award of University of Ljubljana for con... |
Andrej Mrvar : Wouter de Nooy, Andrej Mrvar, Vladimir Batagelj, Mark Granovetter (Series Editor), Exploratory Social Network Analysis with Pajek (Structural Analysis in the Social Sciences), Cambridge University Press (First Edition: 2005, Second Edition: 2011, Third Edition: 2018), (ISBN 0-521-60262-9). Japanese Trans... |
Andrej Mrvar : Andrej Mrvar publications indexed by Google Scholar |
Stochastic block model : The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between... |
Stochastic block model : The stochastic block model takes the following parameters: the number n of vertices; a partition of the vertex set into disjoint subsets C_1, …, C_r, called communities; a symmetric r × r matrix P of edge probabilities. The edge set is then sampled at random as follows: any ...
Stochastic block model : If the probability matrix is a constant, in the sense that P_ij = p for all i, j, then the result is the Erdős–Rényi model G(n, p). This case is degenerate—the partition into communities becomes irrelevant—but it illustrates a close relationship to the Erdős–Rényi model. The plante...
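The generative process just described is easy to sketch directly; the following is a minimal sampler (community sizes, probability matrix, and seed are illustrative assumptions, not taken from the source):

```python
import random

def sample_sbm(sizes, P, seed=0):
    """Sample an undirected graph from a stochastic block model.

    sizes: list of community sizes [|C_1|, ..., |C_r|]
    P: symmetric r x r matrix of edge probabilities
    Returns the edge set as pairs (i, j) with i < j over vertices 0..n-1.
    """
    rng = random.Random(seed)
    # Assign each vertex its community label: sizes [2, 3] -> [0, 0, 1, 1, 1].
    labels = [c for c, size in enumerate(sizes) for _ in range(size)]
    n = len(labels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            # Edge (i, j) appears with probability P[community_i][community_j].
            if rng.random() < P[labels[i]][labels[j]]:
                edges.add((i, j))
    return edges

# Two communities of 5 vertices: dense within (0.8), sparse between (0.05).
edges = sample_sbm([5, 5], [[0.8, 0.05], [0.05, 0.8]])
```

Setting every entry of P to the same p reproduces the degenerate Erdős–Rényi case mentioned above.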
Stochastic block model : Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery. |
Stochastic block model : Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds. Suppose that we allow the size n of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feas... |
Stochastic block model : In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection that is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likeli... |
Stochastic block model : Several variants of the model exist. One minor tweak allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition. More significant variants include the degree-corrected stochastic block model, the hierarchical stochastic block model, the... |
Stochastic block model : Stochastic block models have been recognised as topic models on bipartite networks. In a network of documents and words, a stochastic block model can identify topics: groups of words with a similar meaning.
Stochastic block model : Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights or... |
Stochastic block model : GraphChallenge encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field. Streaming stochastic block partition ... |
Stochastic block model : blockmodeling Girvan–Newman algorithm – Community detection algorithm Lancichinetti–Fortunato–Radicchi benchmark – Algorithm for generating benchmark networks with communities == References ==
Harrison White : Harrison Colyar White (March 21, 1930 – May 18, 2024) was an American sociologist who was the Giddings Professor of Sociology at Columbia University. White played an influential role in the “Harvard Revolution” in social networks and the New York School of relational sociology. He is credited with the ... |
Harrison White : A good summary of White's sociological contributions is provided by his former student and collaborator, Ronald Breiger: White addresses problems of social structure that cut across the range of the social sciences. Most notably, he has contributed (1) theories of role structures encompassing classific... |
Harrison White : In addition to his own publications, White is widely credited with training many influential generations of network analysts in sociology. Including the early work in the 1960s and 1970s during the Harvard Revolution, as well as the 1980s and 1990s at Columbia during the New York School of relational s... |
Harrison White : White died at an assisted living facility in Tucson, Arizona, on May 18, 2024, at the age of 94.
Harrison White : Azarian, Reza. (2003). The General Sociology of Harrison White, Stockholm, Sweden: Stockholm Studies in Social Mechanisms: 135-140 Breiger, Ronald. L. (2005). White, Harrison. Encyclopedia of Social Theory. G. Ritzer. Thousand Oaks, Sage. 2: 884-886. Coase, Ronald H. (1990). The Firm, The Market and Th... |
Harrison White : Weis, J.G. and Matza, D., 1971. Dialogue with David Matza. Issues in Criminology, 6(1), pp.33-53. |
Harrison White : Faculty Website at Columbia University Archived 2018-04-10 at the Wayback Machine Interview with Harrison White by Alair MacLean and Andy Olds Event held in honor of Harrison White SocioSite: Famous Sociologists - Harrison White Information resources on life, academic work and intellectual influence of... |
Bayesian network : A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). While it is one of several forms of causal notation, causal n... |
Bayesian network : Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (... |
Bayesian network : Let us use an illustration to reinforce the concepts of a Bayesian network. Suppose we want to model the dependencies between three variables: the sprinkler (or more appropriately, its state - whether it is on or not), the presence or absence of rain, and whether the grass is wet or not. Observe that tw...
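The sprinkler-rain-grass example lends itself to a short enumeration-based inference sketch. The conditional probability tables below are illustrative assumptions, not values from the source; the DAG is Rain → Sprinkler, with both feeding into GrassWet:

```python
from itertools import product

# Hypothetical CPTs (assumed values for illustration).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {  # P(sprinkler | rain): sprinkler is rarely on when it rains
    True:  {True: 0.01, False: 0.99},
    False: {True: 0.4,  False: 0.6},
}
P_wet = {  # P(grass wet | sprinkler, rain)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(rain, sprinkler, wet):
    # Chain rule along the DAG: P(R) * P(S|R) * P(W|S,R).
    p_w = P_wet[(sprinkler, rain)]
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * (p_w if wet else 1 - p_w)

# P(rain | grass wet) by summing the joint over the hidden sprinkler state.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 4))
```

Exhaustive enumeration like this is exponential in the number of variables, which is why exact-inference algorithms (e.g., the junction tree algorithm discussed later) exploit the graph structure instead.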
Bayesian network : Bayesian networks perform three main inference tasks: |
Bayesian network : Given data x and parameter θ , a simple Bayesian analysis starts with a prior probability (prior) p ( θ ) and likelihood p ( x ∣ θ ) to compute a posterior probability p ( θ ∣ x ) ∝ p ( x ∣ θ ) p ( θ ) . Often the prior on θ depends in turn on other parameters φ that are not mentioned in the l... |
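The update p(θ ∣ x) ∝ p(x ∣ θ) p(θ) can be illustrated on a discretized parameter grid; the coin-flip likelihood and grid below are assumptions chosen for illustration:

```python
# Discretized Bayesian update: prior p(theta), likelihood p(x | theta),
# posterior proportional to their product, then normalized.
thetas = [i / 10 for i in range(11)]           # candidate coin biases
prior = [1 / len(thetas)] * len(thetas)        # uniform prior p(theta)

def likelihood(theta, heads, tails):
    # Bernoulli likelihood of the observed flips given bias theta.
    return theta ** heads * (1 - theta) ** tails

x = (7, 3)  # observed data: 7 heads, 3 tails
unnorm = [p * likelihood(t, *x) for t, p in zip(thetas, prior)]
posterior = [u / sum(unnorm) for u in unnorm]

# With a flat prior, the posterior mode sits at the empirical rate 0.7.
print(max(zip(posterior, thetas))[1])
```

A non-flat prior p(θ), possibly itself depending on hyperparameters φ as the passage describes, would simply replace the uniform `prior` list.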
Bayesian network : Several equivalent definitions of a Bayesian network have been offered. For the following, let G = (V,E) be a directed acyclic graph (DAG) and let X = (Xv), v ∈ V be a set of random variables indexed by V. |
Bayesian network : In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Pa... |
Bayesian network : Notable software for Bayesian networks include: Just another Gibbs sampler (JAGS) – Open-source alternative to WinBUGS. Uses Gibbs sampling. OpenBUGS – Open-source development of WinBUGS. SPSS Modeler – Commercial software that includes an implementation for Bayesian networks. Stan (software) – Stan ... |
Bayesian network : The term Bayesian network was coined by Judea Pearl in 1985 to emphasize: the often subjective nature of the input information; the reliance on Bayes' conditioning as the basis for updating information; and the distinction between causal and evidential modes of reasoning. In the late 1980s Pearl's Probabili...
Bayesian network : Conrady S, Jouffe L (2015-07-01). Bayesian Networks and BayesiaLab – A practical introduction for researchers. Franklin, Tennessee: Bayesian USA. ISBN 978-0-9965333-0-0. Charniak E (Winter 1991). "Bayesian networks without tears" (PDF). AI Magazine. Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrech... |
Bayesian network : An Introduction to Bayesian Networks and their Contemporary Applications On-line Tutorial on Bayesian nets and probability Web-App to create Bayesian nets and run it with a Monte Carlo method Continuous Time Bayesian Networks Bayesian Networks: Explanation and Analogy A live tutorial on learning Baye... |
Bayesian hierarchical modeling : Bayesian hierarchical modelling is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method. The sub-models combine to form the hierarchical model, and Bayes' theorem is used to integrate the... |
Bayesian hierarchical modeling : Statistical methods and models commonly involve multiple parameters that can be regarded as related or connected in such a way that the problem implies a dependence of the joint probability model for these parameters. Individual degrees of belief, expressed in the form of probabilities,... |
Bayesian hierarchical modeling : The assumed occurrence of a real-world event will typically modify preferences between certain options. This is done by modifying the degrees of belief attached, by an individual, to the events defining the options. Suppose in a study of the effectiveness of cardiac treatments, with the... |
Bayesian hierarchical modeling : The usual starting point of a statistical analysis is the assumption that the n values y_1, y_2, …, y_n are exchangeable. If no information – other than data y – is available to distinguish any of the θ_j's from any others, and no ordering or grouping of the paramete...
Bayesian hierarchical modeling : A three stage version of Bayesian hierarchical modeling could be used to calculate probability at 1) an individual level, 2) at the level of population and 3) the prior, which is an assumed probability distribution that takes place before evidence is initially acquired: Stage 1: Individ... |
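A minimal simulation of that three-stage structure might look as follows; all distributions, variances, and group counts here are assumptions for illustration, not part of the source:

```python
import random

rng = random.Random(42)

# Stage 3 (prior): hyperparameter mu ~ Normal(0, 1)
mu = rng.gauss(0.0, 1.0)
# Stage 2 (population level): group effect theta_j ~ Normal(mu, 0.5), 3 groups
thetas = [rng.gauss(mu, 0.5) for _ in range(3)]
# Stage 1 (individual level): observation y_ij ~ Normal(theta_j, 1.0)
data = {j: [rng.gauss(t, 1.0) for _ in range(100)] for j, t in enumerate(thetas)}

# Each group mean estimates its theta_j; the theta_j scatter around mu.
for j, ys in data.items():
    print(j, round(sum(ys) / len(ys), 2))
```

Fitting such a model by Bayes' theorem would invert this generative direction, inferring the θ_j and μ from the observed y.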
Causal Markov condition : The Markov condition, sometimes called the Markov assumption, is an assumption made in Bayesian probability theory, that every node in a Bayesian network is conditionally independent of its nondescendants, given its parents. Stated loosely, it is assumed that a node has no bearing on nodes whi... |
Causal Markov condition : Statisticians are enormously interested in the ways in which certain events and variables are connected. The precise notion of what constitutes a cause and effect is necessary to understand the connections between them. The central idea behind the philosophical study of probabilistic causation... |
Causal Markov condition : In a simple view, releasing one's hand from a hammer causes the hammer to fall. However, doing so in outer space does not produce the same outcome, calling into question if releasing one's fingers from a hammer always causes it to fall. A causal graph could be created to acknowledge that both ... |
Causal Markov condition : Causal model == Notes == |
Influence diagram : An influence diagram (ID) (also called a relevance diagram, decision diagram or a decision network) is a compact graphical and mathematical representation of a decision situation. It is a generalization of a Bayesian network, in which not only probabilistic inference problems but also decision makin... |
Influence diagram : An ID is a directed acyclic graph with three types (plus one subtype) of node and three types of arc (or arrow) between nodes. Nodes: Decision node (corresponding to each decision to be made) is drawn as a rectangle. Uncertainty node (corresponding to each uncertainty to be modeled) is drawn as an o... |
Influence diagram : Consider the simple influence diagram representing a situation where a decision-maker is planning their vacation. There is 1 decision node (Vacation Activity), 2 uncertainty nodes (Weather Condition, Weather Forecast), and 1 value node (Satisfaction). There are 2 functional arcs (ending in Satisfact... |
Influence diagram : The above example highlights the power of the influence diagram in representing an extremely important concept in decision analysis known as the value of information. Consider the following three scenarios; Scenario 1: The decision-maker could make their Vacation Activity decision while knowing what... |
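The value-of-information comparison in the vacation example can be sketched numerically. The weather probability and satisfaction utilities below are hypothetical; the sketch contrasts deciding with no forecast against deciding under perfect information:

```python
# Hypothetical inputs (assumed for illustration).
p_sun = 0.7                                   # P(Weather = sunny)
utility = {                                   # Satisfaction(activity, weather)
    ("outdoor", "sunny"): 100, ("outdoor", "rainy"): 0,
    ("indoor", "sunny"): 40,   ("indoor", "rainy"): 50,
}

def expected_utility(activity, p_sunny):
    return (p_sunny * utility[(activity, "sunny")]
            + (1 - p_sunny) * utility[(activity, "rainy")])

# No information: commit to one activity before the weather is known.
eu_no_info = max(expected_utility(a, p_sun) for a in ("outdoor", "indoor"))

# Perfect information: pick the best activity separately for each weather state.
eu_perfect = (p_sun * max(utility[(a, "sunny")] for a in ("outdoor", "indoor"))
              + (1 - p_sun) * max(utility[(a, "rainy")] for a in ("outdoor", "indoor")))

# The expected value of perfect information is the difference.
print(eu_no_info, eu_perfect, eu_perfect - eu_no_info)
```

An imperfect forecast, as in Scenario 2 of the diagram, would fall between these two bounds.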
Influence diagram : Influence diagrams are hierarchical and can be defined either in terms of their structure or in greater detail in terms of the functional and numerical relation between diagram elements. An ID that is consistently defined at all levels—structure, function, and number—is a well-defined mathematical r... |
Influence diagram : Detwarasiti, A.; Shachter, R.D. (December 2005). "Influence diagrams for team decision analysis" (PDF). Decision Analysis. 2 (4): 207–228. doi:10.1287/deca.1050.0047. Holtzman, Samuel (1988). Intelligent decision systems. Addison-Wesley. ISBN 978-0-201-11602-1. Howard, R.A. and J.E. Matheson, "Influ... |
Influence diagram : What are influence diagrams? Pearl, J. (December 2005). "Influence Diagrams — Historical and Personal Perspectives" (PDF). Decision Analysis. 2 (4): 232–4. doi:10.1287/deca.1050.0055. |
Junction tree algorithm : The junction tree algorithm (also known as 'Clique Tree') is a method used in machine learning to perform marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree. The graph is called a tree because it branches into diff...
Junction tree algorithm : Lauritzen, Steffen L.; Spiegelhalter, David J. (1988). "Local Computations with Probabilities on Graphical Structures and their Application to Expert Systems". Journal of the Royal Statistical Society. Series B (Methodological). 50 (2): 157–224. doi:10.1111/j.2517-6161.1988.tb01721.x. JSTOR 23... |
Latent and observable variables : In statistics, latent variables (from Latin: present participle of lateo, “lie hidden”) are variables that can only be inferred indirectly, through a mathematical model, from other variables that can be directly observed or measured. Such latent variable models are used in man...
Latent and observable variables : There exists a range of different model classes and methodology that make use of latent variables and allow inference in the presence of latent variables. Models include: linear mixed-effects models and nonlinear mixed-effects models Hidden Markov models Factor analysis Item response t... |
Latent and observable variables : Kmenta, Jan (1986). "Latent Variables". Elements of Econometrics (Second ed.). New York: Macmillan. pp. 581–587. ISBN 978-0-02-365070-3. |
Markov blanket : In statistics and machine learning, when one wants to infer a random variable from a set of variables, a subset is usually sufficient and the other variables carry no additional information. Such a subset that contains all the useful information is called a Markov blanket. If a Markov blanket is minimal, meaning that it cannot d...
Markov blanket : A Markov blanket of a random variable Y in a random variable set S = {X_1, …, X_n} is any subset S_1 of S, conditioned on which the other variables are independent of Y: Y ⊥⊥ (S ∖ S_1) ∣ S_1. It means that S_1 contains at least all the information one needs to infer Y, wh...
Markov blanket : A Markov boundary of Y in S is a subset S_2 of S such that S_2 itself is a Markov blanket of Y, but any proper subset of S_2 is not a Markov blanket of Y. In other words, a Markov boundary is a minimal Markov blanket. The Markov boundary of a node A in a Bayesian network is the set of n...
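In a Bayesian network, the Markov boundary of a node consists of its parents, its children, and its children's other parents (co-parents), so it can be read straight off the DAG; the example DAG below is hypothetical:

```python
def markov_boundary(dag, node):
    """Markov boundary of `node` in a Bayesian network.

    dag maps each node to the set of its parents.
    """
    parents = dag[node]
    children = {v for v, ps in dag.items() if node in ps}
    # Co-parents: other parents of the node's children.
    coparents = {p for c in children for p in dag[c]} - {node}
    return parents | children | coparents

# Hypothetical DAG: A -> C <- B, C -> D, C -> E <- B.
dag = {
    "A": set(), "B": set(),
    "C": {"A", "B"},      # C has parents A and B
    "D": {"C"},           # D is a child of C
    "E": {"C", "B"},      # E is a child of C with co-parent B
}
print(sorted(markov_boundary(dag, "C")))
```

Here the boundary of C is its parents A and B, its children D and E, and the co-parent B (already included), so conditioning on {A, B, D, E} renders C independent of everything else.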
Markov blanket : Andrey Markov Free energy minimisation Moral graph Separation of concerns Causality Causal inference == Notes == |
Moral graph : In graph theory, a moral graph is used to find the equivalent undirected form of a directed acyclic graph. It is a key step of the junction tree algorithm, used in belief propagation on graphical models. The moralized counterpart of a directed acyclic graph is formed by adding edges between all pairs of n... |
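The moralization step can be sketched in a few lines: marry all parents of each node, then drop edge directions. The input DAG here is a hypothetical collider:

```python
from itertools import combinations

def moralize(dag):
    """Moralize a DAG given as {node: set of parents}.

    Returns the undirected edge set as frozensets of endpoint pairs.
    """
    edges = set()
    for child, parents in dag.items():
        # Keep every original edge, now undirected.
        for p in parents:
            edges.add(frozenset((p, child)))
        # "Marry" the parents: connect every pair of co-parents.
        for u, v in combinations(parents, 2):
            edges.add(frozenset((u, v)))
    return edges

# Collider A -> C <- B: moralization adds the marriage edge A - B.
dag = {"A": set(), "B": set(), "C": {"A", "B"}}
moral = moralize(dag)
print(sorted(sorted(e) for e in moral))
```

The added A - B edge is what makes the undirected graph able to represent the dependence between A and B that conditioning on the collider C induces.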
Moral graph : A graph is weakly recursively simplicial if it has a simplicial vertex and the subgraph after removing a simplicial vertex and some edges (possibly none) between its neighbours is weakly recursively simplicial. A graph is moral if and only if it is weakly recursively simplicial. A chordal graph (a.k.a., r... |
Moral graph : Unlike chordal graphs that can be recognised in polynomial time, Verma & Pearl (1993) proved that deciding whether or not a graph is moral is NP-complete. |
Moral graph : D-separation Tree decomposition |
Moral graph : M. Studeny: On mathematical description of probabilistic conditional independence structures |
Plate notation : In Bayesian inference, plate notation is a method of representing variables that repeat in a graphical model. Instead of drawing each repeated variable individually, a plate or rectangle is used to group variables that repeat together into a subgraph, and a number is drawn on the plate to represent the...
Plate notation : In this example, we consider Latent Dirichlet allocation, a Bayesian network that models how documents in a corpus are topically related. There are two variables not in any plate; α is the parameter of the uniform Dirichlet prior on the per-document topic distributions, and β is the parameter of the un... |
Plate notation : A number of extensions have been created by various authors to express more information than simply the conditional relationships. However, few of these have become standard. Perhaps the most commonly used extension is to use rectangles in place of circles to indicate non-random variables—either parame... |
Plate notation : Plate notation has been implemented in various TeX/LaTeX drawing packages, but also as part of graphical user interfaces to Bayesian statistics programs such as BUGS and BayesiaLab and PyMC. == References == |
Variational message passing : Variational message passing (VMP) is an approximate inference technique for continuous- or discrete-valued Bayesian networks, with conjugate-exponential parents, developed by John Winn. VMP was developed as a means of generalizing the approximate variational methods used by such techniques... |
Variational message passing : Given some set of hidden variables H and observed variables V , the goal of approximate inference is to maximize a lower-bound on the probability that a graphical model is in the configuration V . Over some probability distribution Q (to be defined later), ln P ( V ) = ∑ H Q ( H ) ln... |
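The identity the truncated passage is quoting is the standard evidence decomposition; reconstructed here (not verbatim from the source), it reads:

```latex
\ln P(V)
  = \underbrace{\sum_{H} Q(H)\,\ln\frac{P(H,V)}{Q(H)}}_{\mathcal{L}(Q)}
  \;+\;
  \underbrace{\sum_{H} Q(H)\,\ln\frac{Q(H)}{P(H\mid V)}}_{\operatorname{KL}\!\left(Q \,\|\, P(H\mid V)\right)}
```

Since ln P(V) is fixed by the data, maximizing the lower bound L(Q) is equivalent to minimizing the KL divergence between Q and the true posterior P(H ∣ V), which is what the next passage exploits.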
Variational message passing : The likelihood estimate needs to be as large as possible; because it is a lower bound, getting closer to ln P(V) improves the approximation of the log likelihood. By substituting in the factorized version of Q, L(Q), parameterized over the hidden nodes H_i as above, is simply the nega...
Variational message passing : Parents send their children the expectation of their sufficient statistic while children send their parents their natural parameter, which also requires messages to be sent from the co-parents of the node. |
Variational message passing : Because all nodes in VMP come from exponential families and all parents of nodes are conjugate to their children nodes, the expectation of the sufficient statistic can be computed from the normalization factor. |
Variational message passing : The algorithm begins by computing the expected value of the sufficient statistics for that vector. Then, until the likelihood converges to a stable value (this is usually accomplished by setting a small threshold value and running the algorithm until it increases by less than that threshol... |
Variational message passing : Because every child must be conjugate to its parent, this has limited the types of distributions that can be used in the model. For example, the parents of a Gaussian distribution must be a Gaussian distribution (corresponding to the Mean) and a gamma distribution (corresponding to the pre... |
Variational message passing : Winn, J.M.; Bishop, C. (2005). "Variational Message Passing" (PDF). Journal of Machine Learning Research. 6: 661–694. Beal, M.J. (2003). Variational Algorithms for Approximate Bayesian Inference (PDF) (PhD). Gatsby Computational Neuroscience Unit, University College London. Archived from t... |
Variational message passing : Infer.NET: an inference framework which includes an implementation of VMP with examples. dimple: an open-source inference system supporting VMP. An older implementation of VMP with usage examples. |
Activity recognition : Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in ... |
Activity recognition : There are some popular datasets that are used for benchmarking activity recognition or action recognition algorithms. UCF-101: It consists of 101 human action classes, over 13k clips and 27 hours of video data. Action classes include applying makeup, playing dhol, cricket shot, shaving beard, etc... |
Activity recognition : By automatically monitoring human activities, home-based rehabilitation can be provided for people suffering from traumatic brain injuries. One can find applications ranging from security-related applications and logistics support to location-based services. Activity recognition systems have been... |
Activity recognition : AI effect Applications of artificial intelligence Conditional random field Gesture recognition Hidden Markov model Motion analysis Naive Bayes classifier Support vector machines Object co-segmentation Outline of artificial intelligence Video content analysis == References == |
AlchemyAPI : AlchemyAPI was a software company in the field of machine learning. Its technology employed deep learning for various applications in natural language processing, such as semantic text analysis and sentiment analysis, as well as computer vision. AlchemyAPI offered both traditionally-licensed software produ... |
AlchemyAPI : As the name suggests, the business model of charging for access to an API was central to the company's identity and uncommon for its time: A TechCrunch article highlighted that even though the technology was similar to IBM's Watson, the pay-per-use model made it more accessible, especially to non-enterpris... |
AlchemyAPI : AlchemyAPI was founded by Elliot Turner in 2005, and launched their API in 2009. In September 2011, ProgrammableWeb added AlchemyAPI to its API Billionaires Club, alongside giants such as Google and Facebook. In February 2013, it was announced that AlchemyAPI had raised US$2 million to improve the capabili... |
AlchemyAPI : A February 2013 article in VentureBeat about big data named AlchemyAPI as one of the primary forces responsible for bringing natural language processing capabilities to the masses. In November 2013, GigaOm listed AlchemyAPI as one of the top startups working in deep learning, along with Cortica and Ersatz. |
AlchemyAPI : Official website |
AlphaDev : AlphaDev is an artificial intelligence system developed by Google DeepMind to discover enhanced computer science algorithms using reinforcement learning. AlphaDev is based on AlphaZero, a system that mastered the games of chess, shogi and go by self-play. AlphaDev applies the same approach to finding faster ... |
AlphaDev : On June 7, 2023, Google DeepMind published a paper in Nature introducing AlphaDev, which discovered new algorithms that outperformed the state-of-the-art methods for small sort algorithms. For example, AlphaDev found a faster assembly language sequence for sorting 5-element sequences. Upon analysing the algo... |
AlphaDev : AlphaDev is built on top of AlphaZero, the reinforcement-learning model that DeepMind trained to master games such as Go and chess. The company's breakthrough was to treat the problem of finding a faster algorithm as a game and then train its AI to win it. AlphaDev plays a single-player game where the object... |
AlphaDev : The primary learning algorithm in AlphaDev is an extension of AlphaZero. |