
config.options.chkHttpReadOnly = true;
// Eric Shulman - ELS Design Studios\n// "Mixed HTML and wiki-style rendering" Plug-in for TiddlyWiki version 1.2.25 or above\nversion.extensions.HTMLFormatting = {major: 1, minor: 0, revision: 1, date: new Date(2005,7,26)};\nwindow.coreWikify=window.wikify;\nwindow.wikify = function(tiddlerText,theViewer,highlightText,highlightCaseSensitive)\n{\n var startHTML = tiddlerText.indexOf('<'+'html'+'>');\n var endHTML = tiddlerText.lastIndexOf('<'+'/html'+'>');\n if (startHTML==-1) // bypass HTML parsing\n { coreWikify(tiddlerText,theViewer,highlightText,highlightCaseSensitive); return; }\n if (startHTML>0) // wikify everything up to HTML tag\n coreWikify(tiddlerText.substr(0,startHTML-1),theViewer,highlightText,highlightCaseSensitive);\n if (startHTML!=-1) // browser parse everything between HTML and /HTML tags (or end of text)\n {\n var HTMLText = tiddlerText.substr(startHTML);\n if (endHTML!=-1) HTMLText = tiddlerText.substring(startHTML,endHTML+7);\n // suppress wiki-style literal handling of newlines\n if (HTMLText.indexOf('<hide linebreaks>')!=-1) HTMLText=HTMLText.replace(regexpNewLine,' ');\n // strip any carriage returns added by Internet Explorer's textarea edit field\n HTMLText=HTMLText.replace(regexpCarriageReturn,'');\n // encode newlines as \sn so Internet Explorer's HTML parser won't eat them\n HTMLText=HTMLText.replace(regexpNewLine,'\s\sn');\n // encode macro brackets (<< and >>) so HTML parser won't eat them\n HTMLText=HTMLText.replace(/<</g,'%macro(').replace(/>>/g,')%');\n // create a span to hold browser-parsed DOM objects\n var newSpan = createTiddlyElement(theViewer,"span",null,null,null);\n // give HTML source to browser's parser (builds DOM nodes)\n newSpan.innerHTML=HTMLText;\n newSpan.normalize();\n // walk resulting node tree and call wikify() on each text node\n wikifyTextNodes(newSpan,highlightText,highlightCaseSensitive);\n }\n if (endHTML!=-1) // wikify everything after HTML tag\n coreWikify(tiddlerText.substr(endHTML+8),theViewer,highlightText,highlightCaseSensitive);\n // DEBUG showNodeTree(theViewer.parentNode,theViewer);\n\n}\n\nfunction wikifyTextNodes(theNode,highlightText,highlightCaseSensitive)\n{\n // pre-order traversal\n for (var i=0;i<theNode.childNodes.length;i++)\n {\n var theChild=theNode.childNodes.item(i);\n wikifyTextNodes(theChild,highlightText,highlightCaseSensitive);\n if (theChild.nodeName=='#text')\n {\n // don't bother to wikify pure whitespace nodes (if any)\n if (theChild.nodeValue.replace(/\ss+/,"").replace(/\st+/,"").length!=0)\n {\n // DEBUG alert('wikify text: "'+theChild.nodeValue.replace(regexpBackSlashEn,'\sn')+'"');\n var theClass = (theNode.className.substr(0,6)=="viewer")?"viewer":null;\n var newNode = createTiddlyElement(null,"span",null,theClass,null);\n // decode newlines and macro brackets for wikification\n var theText = theChild.nodeValue.replace(regexpBackSlashEn,'\sn').replace(/\s%macro\s(/g,'<<').replace(/\s)\s%/g,'>>');\n coreWikify(theText,newNode,highlightText,highlightCaseSensitive);\n theNode.replaceChild(newNode,theChild);\n }\n }\n }\n}\n\n// Use this function to generate a report of the DOM tree objects starting from a given node.\n// place = where to display DOM object report, theNode = root of DOM object tree to be reported\nfunction showNodeTree(place,theNode)\n{\n createTiddlyElement(place,"HR",null,null,null);\n var theReport = createTiddlyElement(place,"div",null,null,null);\n walkNodeTree(theReport,theNode,'');\n}\nfunction walkNodeTree(theOutput,theNode,thePrefix)\n{\n var msg=thePrefix+':'+((theNode.nodeName=='#text')?' ':theNode.nodeName);\n var href = (theNode.href)?' href='+theNode.href:'';\n var id = (theNode.id)?' id='+theNode.id:'';\n var val = (theNode.value)?' value='+theNode.value:'';\n var text = (theNode.nodeName=='#text')?'"'+theNode.nodeValue.replace(regexpBackSlashEn,'\sn')+'"':'';\n if ( (theNode.nodeName!='B')\n &&(theNode.nodeName!='I')\n &&(theNode.nodeName!='TBODY')\n &&(theNode.nodeName!='SPAN'))\n createTiddlyElement(theOutput,"div",null,null,msg+val+id+href+text);\n for (var i=0;i<theNode.childNodes.length;i++)\n {\n var theChild=theNode.childNodes.item(i);\n var childmsg=msg;\n if (theNode.childNodes.length>1) childmsg+='['+(i+1)+']';\n walkNodeTree(theOutput,theChild,childmsg);\n }\n}\n
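The plugin's central trick is to encode newlines and macro brackets before handing text to the browser's HTML parser, then decode them again before wikification. That round trip can be exercised on its own; a minimal standalone sketch (plain Node-runnable JavaScript with hypothetical helper names, not part of the plugin):

```javascript
// Round-trip check of the plugin's encoding scheme: real newlines become the
// two-character sequence "\n", and macro brackets << >> become %macro( / )%
// so the browser's HTML parser leaves them alone.
const regexpNewLine = new RegExp("\n", "mg");
const regexpBackSlashEn = new RegExp("\\\\n", "mg");

function encodeForParser(text) {
  return text
    .replace(regexpNewLine, "\\n")     // newline -> literal backslash + n
    .replace(/<</g, "%macro(")
    .replace(/>>/g, ")%");
}

function decodeForWikify(text) {
  return text
    .replace(regexpBackSlashEn, "\n")  // literal backslash + n -> newline
    .replace(/%macro\(/g, "<<")
    .replace(/\)%/g, ">>");
}

const sample = "line one\n<<today>> and <<version>>";
const roundTrip = decodeForWikify(encodeForParser(sample));
console.log(roundTrip === sample); // true
```

The same idea is what lets the plugin feed `innerHTML` to the browser without Internet Explorer's parser eating the newlines or macro calls.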
[[Introduction]] [[Research]] [[Bio]] [[Contact]] [[Selected Papers|selected recent papers]] [[Teaching]] [[Pictures]] [[Useful Links]] [[Group]] [[Grants]] [[Software]] [[A cute Packers fan]]
@media screen\n{\n\nbody {\n font: 14px/125% "Lucida Grande", "Trebuchet MS", "Bitstream Vera Sans", Verdana, Helvetica, sans-serif;\n}\n\n}\n\n@media print\n{\n\nbody {\n font-size: 6.2pt;\n font-family: "Lucida Grande", "Bitstream Vera Sans", Helvetica, Verdana, Arial, sans-serif;\n}\n\n}\n\n.viewer {\n text-align: justify;\n}\n\n
[[Programming]]\n[[Research Links]]\n[[Conference Stuff]]\n[[LaTeX and friends]]\n
<html>\n<iframe src=";height=614" style=" border-width:0 " width="640" frameborder="0" height="614"></iframe> \n</html>
''Abstract''\n\nWe propose an optimization algorithm to solve the Brachytherapy Seed Localization problem in prostate brachytherapy. Our algorithm is based on novel geometric approaches to exploit the special structure of the problem and relies on a number of key observations which help us formulate the optimization problem as a minimization Integer Program. Our IP model precisely defines the feasibility polyhedron for this problem; the solution to its corresponding linear program is rounded to yield an integral solution to the problem of determining correspondences between seeds in multiple projection images. The algorithm is efficient in theory as well as in practice and performs well on simulation data (98% accuracy) and real X-ray images (95% accuracy). We present in detail the underlying ideas and an extensive set of performance evaluations based on our implementation. \n\n[[PDF|]]\n\n[[bibtex|]]\n\n''[[Copyright]]''
''Abstract''\n\nRecent research in biology has indicated correlations between the movement patterns of functional sites (such as replication sites in DNA) and zones of genetic activity within a nucleus. A detailed study and analysis of the motion dynamics of these sites can reveal an interesting insight into their role in DNA replication and function. In this paper, we propose a suite of novel techniques to determine, analyze, and interpret the mobility patterns of functional sites. Our algorithms are based on interesting ideas from theoretical computer science and learning and provide for the first time the tools to interpret the seemingly stochastic motion patterns of the functional sites within the nucleus in terms of a set of tractable `patterns' which can then be analyzed to understand their biological significance.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n''[[Copyright]]''
''Abstract''\n\nBiplane angiographic imaging is a primary method for visual and quantitative assessment of the vasculature. In order to reliably reconstruct the three dimensional (3D) position, orientation, and shape of the vessel structure, a key problem is to determine the rotation matrix ''R'' and the translation vector ''t'' which relate the two coordinate systems. This so-called Imaging Geometry Determination problem is well studied in the medical imaging and computer vision communities and a number of interesting approaches have been reported. Each such technique determines a solution which yields 3D vasculature reconstructions with errors comparable to other techniques. From the literature, we see that different techniques with different optimization strategies yield reconstructions with equivalent errors. We have investigated this behavior, and it appears that the error in the input data leads to this equivalence effectively yielding what we call the solution space of feasible geometries, i.e., geometries which could be solutions given the error or uncertainty in the input image data. In this paper, we lay the theoretical framework for this concept of a solution space of feasible geometries using simple schematic constructions, deriving the underlying mathematical relationships, presenting implementation details, and discussing implications and applications of the proposed idea. Because the solution space of feasible geometries encompasses equivalent solutions given the input error, the solution space approach can be used to evaluate the precision of calculated geometries or 3D data based on known or estimated uncertainties in the input image data. We also use the solution space approach to calculate an imaging geometry, i.e., a solution.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n''[[Copyright]]''
''Abstract''\n\nWe study the problem of classifying an autistic group from controls using structural image data alone, a task that otherwise requires a clinical interview with a psychologist. Because of the highly convoluted brain surface topology, feature extraction poses the first obstacle. A clinically relevant measure called the cortical thickness has shown promise but yields a rather challenging learning problem -- where the dimensionality of the distribution is extremely large and the training set is small. By observing that each point on the brain cortical surface may be treated as a 'hypothesis', we propose a new algorithm for LPBoosting (with truncated neighborhoods) for this problem. In addition to learning a high quality classifier, our model incorporates topological priors into the classification framework directly -- that two neighboring points on the cortical surface (hypothesis pairs) must have similar discriminative qualities. As a result, we obtain not just a label {+1,-1} for test items, but also an indication of the 'discriminative regions' on the cortical surface. We discuss the formulation and present interesting experimental results.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n''[[Copyright]]''
Copyright and all rights therein are retained by authors or by other copyright holders (e.g., IEEE, ACM or other publishers).\n\nThe documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
[[ACM Events|]]\n[[Medical Imaging Conferences|]]\n[[Conference listings|]]\n[[AI Conference rankings|]]\n[[Journal rankings|]]
''Abstract''\n\nWe study the cosegmentation problem where the objective is to segment the same object (i.e., region) from a pair of images. The segmentation for each image can be cast using a partitioning/segmentation function with an additional constraint that seeks to make the histograms of the segmented regions (based on intensity and texture features) similar. Using Markov Random Field (MRF) energy terms for the simultaneous segmentation of the images together with histogram consistency requirements using the squared \sell_2 (rather than \sell_1 ) distance, after linearization and adjustments, yields an optimization model with some interesting combinatorial properties. We discuss these properties which are closely related to certain relaxation strategies recently introduced in computer vision. Finally, we show some experimental results of the proposed approach.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Oral Presentation Slides|]] (latex sources also available by request)\n\nCode (coming soon), Data (available, send email)\n\n''[[Copyright]]''
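The histogram-consistency requirement above compares the two segmented foregrounds through a squared ℓ2 distance between their feature histograms. A toy sketch of just that term (illustrative only; the function name and bin counts are hypothetical, not from the paper):

```javascript
// Squared L2 distance between two foreground feature histograms:
// penalty = sum over bins b of (h1[b] - h2[b])^2.
// h1, h2 are bin counts over the same feature bins (intensity/texture).
function squaredL2(h1, h2) {
  let d = 0;
  for (let b = 0; b < h1.length; b++) {
    const diff = h1[b] - h2[b];
    d += diff * diff;
  }
  return d;
}

console.log(squaredL2([4, 2, 0], [1, 2, 2])); // 13
console.log(squaredL2([3, 3], [3, 3]));       // 0 (identical foregrounds)
```

The ℓ2 (rather than ℓ1) choice is what yields, after linearization, the combinatorial structure the abstract refers to.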
''Abstract''\n\n Graph-cuts based algorithms are effective for a variety of segmentation tasks in computer vision. Ongoing research is focused toward making the algorithms even more general, as well as to better understand their behavior with respect to issues such as the choice of the weighting function and sensitivity to placement of seeds. In this paper, we investigate in the context of neuroimaging segmentation, the sensitivity/stability of the solution with respect to the input "labels" or seeds. In particular, as a form of parameter learning, we are interested in the effect of allowing the given set of labels (and consequently, the response/statistics of the weighting function) to vary for obtaining lower energy segmentation solutions. This perturbation leads to a refined label set (or parameters) better suited to the input image, yielding segmentations that are less sensitive to the set of labels or seeds provided. Our proposed algorithm (using Parametric Pseudoflow) yields improvements over graph-cuts based segmentation with a fixed set of labels. We present experiments on about 450 3-D brain image volumes demonstrating the efficacy of the algorithm.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Code (for the underlying parametric max-flow)|]] GUI based code link\n\n''[[Copyright]]''
''Abstract''\n\nWe present a novel computational framework for characterizing signal in brain images via nonlinear pairing of critical values of the signal. Among the astronomically large number of different pairings possible, we show that representations derived from specific pairing schemes provide concise representations of the image. This procedure yields a min-max diagram of the image data. The representation turns out to be especially powerful in discriminating image scans obtained from different clinical populations, and directly opens the door to applications in a variety of learning and inference problems in biomedical imaging. Notably, this strategy significantly departs from the standard image analysis paradigm – where the "mean" signal is used to characterize an ensemble of images. This offers robustness to noise in subsequent statistical analyses, for example; however, the attenuation of the signal content due to averaging makes it rather difficult to identify subtle variations. The proposed topologically oriented method seeks to address these limitations by characterizing and encoding topological features or attributes of the image. As an application, we have used this method to characterize cortical thickness measures along brain surfaces in classifying autistic subjects. Our promising experimental results provide evidence of the power of this representation.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n''[[Copyright]]''
''Abstract''\n\nWe consider the ensemble clustering problem where the task is to `aggregate' multiple clustering solutions into a single consolidated clustering that maximizes the shared information among given clustering solutions. We obtain several new results for this problem. First, we note that the notion of agreement under such circumstances can be better captured using an agreement measure based on a $2D$ string encoding rather than voting strategy based methods proposed in literature. Using this generalization, we first derive a nonlinear optimization model to maximize the new agreement measure. We then show that our optimization problem can be transformed into a strict $0$-$1$ Semidefinite Program (SDP) via novel convexification techniques which can subsequently be relaxed to a polynomial time solvable SDP. Our experiments indicate improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. We discuss evaluations on clustering and image segmentation databases. \n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]] (latex sources also available by request)\n\n[[Code|]] \n\n''[[Copyright]]''
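The voting-style notion of agreement that the $2D$ string encoding generalizes can be illustrated with a simple pair-counting measure over two clustering solutions (a Rand-index-style sketch; the function name and data are hypothetical, and the paper's encoding-based measure differs):

```javascript
// Pair-counting agreement between two clusterings of the same n items,
// given as label arrays: a pair (i,j) "agrees" if both solutions make
// the same co-cluster decision for it. Returns the agreeing fraction.
function pairAgreement(a, b) {
  const n = a.length;
  let agree = 0, pairs = 0;
  for (let i = 0; i < n; i++)
    for (let j = i + 1; j < n; j++) {
      pairs++;
      const sameA = a[i] === a[j];
      const sameB = b[i] === b[j];
      if (sameA === sameB) agree++;
    }
  return agree / pairs;
}

// Identical partitions up to relabeling agree on every pair.
console.log(pairAgreement([0, 0, 1, 1], [1, 1, 0, 0])); // 1
```

Note this measure only sees co-cluster decisions, which is exactly the limitation the string-encoding formulation is designed to overcome.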
''Abstract''\n\nThis paper proposes a new optimization framework for tomographic reconstruction of 3D volumes when only a limited number of projection views are available. The problem has several important clinical applications spanning coronary angiographic imaging, breast tomosynthesis and dental imaging. We first show that the limited view reconstruction problem can be formulated as a `constrained' version of the metric labeling problem. This lays the groundwork for a linear programming framework that brings together metric labeling classification and classical algebraic tomographic reconstruction (ART) in a unified model. If the imaged volume is known to be comprised of a finite set of attenuation coefficients, given a regular limited view reconstruction as an input, we can view it as a `denoising' task -- where voxels must be reassigned subject to maximally maintaining consistency with the input reconstruction and the objective of ART simultaneously. The approach can reliably reconstruct volumes with several multiple-contrast objects as well as the simpler binary contrast case which can be solved near-optimally in practice. We present evaluations on cone beam computed tomography; the method can also be readily extended to other tomographic modalities as a viable approach for limited-view tomographic reconstruction.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n''[[Copyright]]''
''Abstract''\n\nWe study the so-called Generalized Median graph problem where the task is to construct a prototype (i.e., a `model') from an input set of graphs. The problem finds applications in many vision (e.g., object recognition) and learning problems where graphs are increasingly being adopted as a representation tool. Existing techniques for this problem are evolutionary search based; in this paper, we propose a polynomial time algorithm based on a linear programming formulation. We present an additional bi-level method to obtain solutions arbitrarily close to the optimal in non-polynomial time (in worst case). Within this new framework, one can optimize edit distance functions that capture similarity by considering vertex labels as well as the graph structure simultaneously. In the context of our motivating application, we discuss experiments on molecular image analysis problems - the methods will provide the basis for building a topological map of all pairs of the human chromosome. \n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]] [[Poster (A4 size)|]] (latex sources also available by request)\n\n''[[Copyright]]''
''Abstract''\n\nThis paper proposes a new discrete optimization framework for tomographic reconstruction and segmentation of CT volumes when only a few projection views are available. The problem has important clinical applications in coronary angiographic imaging. We first show that the limited view reconstruction and segmentation problem can be formulated as a 'constrained' version of the metric labeling problem. This lays the groundwork for a linear programming framework that brings metric labeling classification and classical algebraic tomographic reconstruction (ART) together in a unified model. If the imaged volume is known to be comprised of a finite set of attenuation coefficients (a realistic assumption), given a regular limited view reconstruction, we view it as a task of voxel reassignment subject to maximally maintaining consistency with the input reconstruction and the objective of ART simultaneously. The approach can reliably reconstruct (or segment) volumes with several multiple-contrast objects. We present evaluations using experiments on cone beam computed tomography.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n''[[Copyright]]''
''Abstract''\n\n[[PDF|]]\n\n[[bibtex|]]\n\nCode (coming soon), Data (available, send email)\n\n''[[Copyright]]''
I am assuming you can get to MSC (Medical Sciences Center), see or on [[Google Maps|]] (''click the MSC marker in the map for more information'').
[[Dictionary of Algorithms and Data Structures|]]\n[[Geometry in Medical Imaging|]]\n[[Stony Brook Algorithm Repository|]]\n[[Compendium of NP Optimization Problems|]]\n[[Opt online|]]\n[[Handbook of Computational Geometry|]]\n[[Matrix cookbook|]]\n
''Abstract''\n\nSpatially augmented LP Boosting is a machine learning technique with a bias to make it especially suited to 3D medical imaging problems; the algorithm tries to produce a linear classifier which corresponds to "smooth" contiguous regions in the brain. The "spatial augmentation" is achieved by penalizing certain variations between weights placed on neighboring voxels concurrently with the standard learning phase, i.e., smoothness is injected into the trade-off between L1-norm sparsity and training set classification margin, leading to a 3-way trade-off.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
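Two of the three terms in that trade-off are easy to state numerically: the L1 sparsity of the weight vector and the spatial penalty on weight differences across neighboring voxels (the margin term is omitted; names, weights, and the neighbor list below are illustrative, not from the implementation):

```javascript
// L1 sparsity term: sum of |w_i| over all voxel weights.
function sparsity(w) {
  return w.reduce((s, wi) => s + Math.abs(wi), 0);
}

// Spatial smoothness term: sum of |w_i - w_j| over neighboring voxel
// pairs, so large jumps between adjacent voxels are penalized.
function spatialPenalty(w, edges) {
  return edges.reduce((s, [i, j]) => s + Math.abs(w[i] - w[j]), 0);
}

const w = [0.5, 0.4, 0.0, -0.3];        // toy weights on 4 voxels
const edges = [[0, 1], [1, 2], [2, 3]]; // toy 1-D neighborhood
console.log(sparsity(w));               // ≈ 1.2
console.log(spatialPenalty(w, edges));  // ≈ 0.8
```

The learning phase then balances these two penalties against the classification margin, which is the 3-way trade-off the abstract describes.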
[[PSTricks|]]\n[[PGF/Tikz|]]\n[[Seminar|]]\n[[Prosper|]]\n[[Beamer|]]\n[[Ipe|]]\n[[RefTex|]]\n[[Bibtex for Pubmed|]]\n[[UW-Biostat & Med. Info LaTeX letters|]] (very crude, feel free to hack)\n[[NIH Biosketch|]]
''Abstract''\n\nabstract goes here\n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Slides|]]\n\nCode and project details\n\n''[[Copyright]]''
Look under the details section of the corresponding paper.
''Abstract''\n\nWe study the problem of segmenting specific white matter structures of interest from Diffusion Tensor (DTMR) images of the human brain. This is an important requirement in many Neuroimaging studies: for instance, to evaluate whether a brain structure exhibits group level differences as a function of disease in a set of images. Typically, interactive expert guided segmentation has been the method of choice for such applications, but this is tedious for larger datasets common today (> 200 images). To address this problem, the strategy we adopt is to endow an image segmentation algorithm with "advice" encoding some global characteristics of the region(s) we want to extract. This is accomplished by constructing (using expert-segmented images) an epitome of a specific region - as a histogram over a bag of 'words' (e.g., suitable feature descriptors). Now, given such a representation, the problem reduces to segmenting a new 3-D DTMR image of the brain with additional constraints that enforce consistency between the segmented foreground and the pre-specified histogram (over features). We present combinatorial approximation algorithms to incorporate such domain specific constraints for Markov Random Field (MRF) segmentation. Making use of recent results on concurrent segmentation of multiple images, we derive effective solution strategies for our problem. We describe our main ideas, provide analysis of solution quality, and present promising experimental evidence showing that many structures of interest in Neuroscience can be extracted reliably from 3-D DTMR brain image volumes using our algorithm.\n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Slides|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
''Abstract''\n\nAlzheimer’s Disease (AD) and other neurodegenerative diseases affect over 20 million people worldwide, and this number is projected to significantly increase in the coming decades. Proposed imaging based markers have shown steadily improving levels of sensitivity/specificity in classifying individual subjects as AD or normal. Several of these efforts have utilized statistical machine learning techniques, using brain images as input, as means of deriving such AD related markers. A common characteristic of this line of research is a focus on either (1) using a single imaging modality for classification, or (2) incorporating several modalities, but reporting separate results for each. One strategy to improve on the success of these methods is to leverage all available imaging modalities together in a single automated learning framework. The rationale is that some subjects may show signs of pathology in one modality but not in another – by combining all available images a clearer view of the progression of disease pathology will emerge. Our method is based on the Multi Kernel Learning (MKL) framework, which allows the inclusion of an arbitrary number of views of the data in a maximum margin, kernel learning framework. The principal innovation behind MKL is that it learns an optimal combination of kernel (similarity) matrices while simultaneously training a classifier. In classification experiments MKL outperformed an SVM trained on all available features by 3% – 4%. We are especially interested in whether such markers are capable of identifying early signs of the disease. To address this question, we have examined whether our multi-modal disease marker (MMDM) can predict conversion from Mild Cognitive Impairment (MCI) to AD. Our experiments reveal that this measure shows significant group differences between MCI subjects who progressed to AD, and those who remained stable for 3 years. 
These differences were most significant in MMDM based on imaging data. We also discuss the relationship between our MMDM and an individual’s conversion from MCI to AD.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
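The object at the heart of MKL is a combined kernel K = Σ_m β_m K_m, with nonnegative weights β_m learned jointly with the classifier. Forming the combination for fixed weights is straightforward (a toy sketch; the function name and the 2x2 kernels are hypothetical):

```javascript
// Combine M kernel (similarity) matrices into one: K = sum_m beta_m * K_m.
// `kernels` is an array of n x n matrices (one per modality/view),
// `betas` the per-kernel weights; MKL learns the betas, here they are fixed.
function combineKernels(kernels, betas) {
  const n = kernels[0].length;
  const K = Array.from({ length: n }, () => new Array(n).fill(0));
  kernels.forEach((Km, m) => {
    for (let i = 0; i < n; i++)
      for (let j = 0; j < n; j++)
        K[i][j] += betas[m] * Km[i][j];
  });
  return K;
}

const K1 = [[1, 0], [0, 1]]; // similarity from one imaging modality
const K2 = [[1, 1], [1, 1]]; // similarity from another modality
console.log(combineKernels([K1, K2], [0.5, 0.5])); // [[1, 0.5], [0.5, 1]]
```

A convex combination of positive semidefinite kernels is again a valid kernel, which is why subjects showing pathology in only one modality can still contribute similarity through the combined K.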
''Abstract''\n\nIn this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0-1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Code|]] \n\n''[[Copyright]]''
''Abstract''\n\nMaximal margin based frameworks have emerged as a powerful tool for supervised learning. The extension of these ideas to the unsupervised case, however, is problematic since the underlying optimization entails a discrete component. In this paper, we first study the computational complexity of maximal hard margin clustering and show that the hard margin clustering problem can be precisely solved in $O(n^{d+2})$ time where $n$ is the number of the data points and $d$ is the dimensionality of the input data. However, since it is well known that many datasets commonly express themselves primarily in far fewer dimensions, our interest is in evaluating if a careful use of dimensionality reduction can lead to practical and effective algorithms. We build upon these observations and propose a new algorithm that gradually increases the number of features used in the separation model in each iteration, and analyze the convergence properties of this scheme. We report on promising numerical experiments based on a ‘truncated’ version of this approach. Our experiments indicate that for a variety of datasets, good solutions equivalent to those from other existing techniques can be obtained in significantly less time.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n''[[Copyright]]''
''Abstract''\n\nWe propose a new algorithm for learning kernels for variants of the Normalized Cuts (N-cuts) objective, i.e., given a set of training examples with known partitions, how should a basis set of similarity functions be combined to induce N-cuts favorable distributions. Such a procedure facilitates design of good affinity matrices. It also helps assess the importance of different feature types for discrimination. Rather than formulating the learning problem in terms of the spectral relaxation, the alternative we pursue here is to work in the original discrete setting (i.e., the relaxation occurs much later). We show that this strategy is useful: while the initial specification seems rather difficult to optimize efficiently, a set of manipulations reveal a related model which permits a nice SDP relaxation. A salient feature of our model is that the eventual problem size is only a function of the number of input kernels and not the training set size. This relaxation also allows strong optimality guarantees, if certain conditions are satisfied. We show that the sub-kernel weights obtained provide a complementary approach for MKL based methods. Our experiments on Caltech101 and ADNI (a brain imaging dataset) show that the quality of solutions is competitive with the state-of-the-art.\n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Code and project details|]]\n\n[[Talk|]]\n\n''[[Copyright]]''
''Abstract''\n\nAlzheimer's disease (AD) research has recently witnessed a great deal of activity focused on developing new statistical learning tools for automated inference using imaging data. The workhorse for many of these techniques is the Support Vector Machine (SVM) framework (or more generally kernel based methods). Most of these require, as a first step, specification of a kernel matrix K between input examples (i.e., images). The inner product between images I~~i~~ and I~~j~~ in a feature space can generally be written in closed form, and so it is convenient to treat K as "given". However, in certain neuroimaging applications such an assumption becomes problematic. As an example, it is rather challenging to provide a scalar measure of similarity between two instances of highly attributed data such as cortical thickness measures on cortical surfaces. Note that cortical thickness is known to be discriminative for neurological disorders, so leveraging such information in an inference framework, especially within a multi-modal method, is potentially advantageous. But despite being clinically meaningful, relatively few works have successfully exploited this measure for classification or regression. Motivated by these applications, our paper presents novel techniques to compute similarity matrices for such topologically-based attributed data. Our ideas leverage recent developments to characterize signals (e.g., cortical thickness) motivated by the persistence of their topological features, leading to a scheme for simple constructions of kernel matrices. As a proof of principle, on a dataset of 356 subjects from the ADNI study, we report good performance on several statistical inference tasks without any feature selection, dimensionality reduction, or parameter tuning. \n\n[[PDF|]]\n\n[[bibtex|]]\n\n''[[Copyright]]''
''Abstract''\n\nThis paper is focused on the Co-segmentation problem -- where the objective is to segment a similar object from a pair of images. The background in the two images may be arbitrary; therefore, simultaneous segmentation of both images must be performed with a requirement that the appearance of the two sets of foreground pixels in the respective images are consistent. Existing approaches [1, 2] cast this problem as a Markov Random Field (MRF) based segmentation of the image pair with a regularized difference of the two histograms -- assuming a Gaussian prior on the foreground appearance or by calculating the sum of squared differences. Both are interesting formulations but lead to difficult optimization problems, due to the presence of the second (histogram difference) term. The model proposed here bypasses measurement of the histogram differences in a direct fashion; we show that this enables obtaining efficient solutions to the underlying optimization model. Our new algorithm is similar to the existing methods in spirit, but differs substantially in that it can be solved to optimality in polynomial time using a maximum flow procedure on an appropriately constructed graph. We discuss our ideas and present promising experimental results.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Oral Presentation Slides|]] \n\n[[Code|]] [[Data|]]\n\n@@color(blue):''The code is heavily based on [[Hochbaum's Pseudoflow|]]. If you make use of this distribution, we request that you acknowledge and cite the Pseudoflow paper as well (Operations Research, Volume 56(4), 992-1009, 2008).'' @@\n\n''[[Copyright]]''
''Abstract''\n\nOur primary interest is in generalizing the problem of Cosegmentation to a large group of images, that is, concurrent segmentation of common foreground region(s) from multiple images. We further wish for our algorithm to offer scale invariance (foregrounds may have arbitrary sizes in different images) and the running time to increase (no more than) near linearly in the number of images in the set. What makes this setting particularly challenging is that even if we ignore the scale invariance desideratum, the Cosegmentation problem, as formalized in many recent papers (except [1]), is already hard to solve optimally in the two image case. A straightforward extension of such models to multiple images leads to loose relaxations; and unless we impose a distributional assumption on the appearance model, existing mechanisms for image-pair-wise measurement of foreground appearance variations lead to very large problem sizes (even for a moderate number of images). This paper presents a surprisingly easy-to-implement algorithm which performs well, and satisfies all requirements listed above (scale invariance, low computational requirements, and viability for the multiple image setting). We present qualitative and technical analysis of the properties of this framework.\n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\n[[Slides|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
''Abstract''\n\nDiffusion Tensor Imaging (DTI) provides estimates of local directional information regarding paths of white matter tracts in the human brain. An important problem in DTI is to infer tract connectivity (and networks) from given image data. We propose a method that infers high-level network structures and connectivity information from Diffusion Tensor images. Our algorithm extends principles from perceptual contours to construct a weighted line-graph based on how well the tensors agree with a set of proposal curves (regularized by length and curvature). The problem of extracting high-level anatomical connectivity is then posed as an optimization problem over this curvature-regularizing graph, which yields subgraphs that represent the tracts' network topology. We present experimental results and an open-source implementation of the algorithm.\n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\n[[Slides|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
''Abstract''\n\nWe recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence — the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. \n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Slides|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
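The Random Walker core that the abstract builds on reduces to a sparse linear system in the combinatorial graph Laplacian (Grady's formulation): with seed probabilities fixed, the unseeded probabilities solve L_U x_U = -B^T x_seed. A minimal numpy sketch on a hypothetical 6-pixel chain with hand-picked edge weights, not the paper's GPU/cosegmentation pipeline:

```python
import numpy as np

# Toy image: a chain of 6 pixels; seeds: pixel 0 = foreground, pixel 5 = background.
n = 6
W = np.zeros((n, n))
weights = [1.0, 1.0, 0.1, 1.0, 1.0]   # weak edge between pixels 2 and 3 (an "edge" in the image)
for i, w in enumerate(weights):
    W[i, i + 1] = W[i + 1, i] = w

L = np.diag(W.sum(1)) - W             # combinatorial graph Laplacian
seeds, unseeded = [0, 5], [1, 2, 3, 4]
x_seed = np.array([1.0, 0.0])         # foreground probability at the two seeds

# Solve L_U x_U = -B^T x_seed for the unseeded foreground probabilities
L_U = L[np.ix_(unseeded, unseeded)]
B = L[np.ix_(seeds, unseeded)]
x_U = np.linalg.solve(L_U, -B.T @ x_seed)
labels = (x_U > 0.5)                  # threshold probabilities into a segmentation
```

The weak 0.1 edge acts like a high resistance, so the probabilities drop sharply across it and the chain splits there, which is exactly the behavior a boundary-aligned weight function is designed to produce.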
I received a Ph.D. in [[Computer Science|]] in 2007 from the [[State University of New York at Buffalo|]], where I collaborated extensively with the [[Toshiba Stroke Research Center|]]. Earlier, I received a Masters in 2004, and finished my undergraduate degree in Computer Science and Engineering in 2002 at [[Uttar Pradesh Technical University|]].\n\nIf you are looking for additional information on me, go [[here|Personal]]. \n\n
We develop new algorithms to analyze and exploit the joint subspace structure of a set of related images to facilitate the process of concurrent segmentation of a large set of images. Most existing approaches for this problem are either limited to extracting a single similar object across the given image set or do not scale well to a large number of images containing multiple objects varying at different scales. One of the goals of this paper is to show that various desirable properties of such an algorithm (ability to handle multiple images with multiple objects showing arbitrary scale variations) can be cast elegantly using simple constructs from linear algebra: this significantly extends the operating range of such methods. While intuitive, this formulation leads to a hard optimization problem where one must perform the image segmentation task together with appropriate constraints which enforce desired algebraic regularity (e.g., common subspace structure). We propose efficient iterative algorithms (with small computational requirements) whose key steps reduce to objective functions solvable by max-flow and/or nearly closed form identities. We study the qualitative, theoretical, and empirical properties of the method, and present results on benchmark datasets.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
Multiple Kernel Learning (MKL) generalizes SVMs to the setting where one simultaneously trains a linear classifier and chooses an optimal combination of given base kernels. Model complexity is typically controlled using various norm regularizations on the base kernel mixing coefficients. Existing methods neither regularize nor exploit potentially useful information pertaining to how kernels in the input set ‘interact’; that is, higher order kernel-pair relationships that can be easily obtained via unsupervised (similarity, geodesics), supervised (correlation in errors), or domain knowledge driven mechanisms (which features were used to construct the kernel?). We show that by substituting the norm penalty with an arbitrary quadratic function Q \ssucceq 0, one can impose a desired covariance structure on mixing weights, and use this as an inductive bias when learning the concept. This formulation significantly generalizes the widely used 1- and 2-norm MKL objectives. We explore the model’s utility via experiments on a challenging Neuroimaging problem, where the goal is to predict a subject’s conversion to Alzheimer’s Disease (AD) by exploiting aggregate information from many distinct imaging modalities. Here, our new model outperforms the state of the art (p-values < 10^^−3^^). We briefly discuss ramifications in terms of learning bounds (Rademacher complexity).\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Code|]]\n\n''[[Copyright]]''
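To make the "quadratic penalty in place of a norm" idea concrete, here is a small hedged sketch: two hypothetical base kernels, an interaction matrix Q built from pairwise kernel alignment, and mixing weights from an unconstrained quadratic objective, clipped to be nonnegative. This illustrates only the regularizer; the actual Q-MKL solver alternates such a step with SVM training.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sign(X[:, 0])                       # toy labels

# Two base kernels (illustrative choices): linear and Gaussian RBF
K1 = X @ X.T
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K2 = np.exp(-sq / 2.0)
kernels = [K1, K2]

def align(A, B):
    """Frobenius (kernel) alignment between two Gram matrices."""
    return (A * B).sum() / (np.linalg.norm(A) * np.linalg.norm(B))

# Kernel-pair interaction matrix Q (PSD: it is a normalized Gram matrix of kernels)
Q = np.array([[align(A, B) for B in kernels] for A in kernels])

# Toy per-kernel "utility": alignment with the ideal kernel y y^T
c = np.array([align(K, np.outer(y, y)) for K in kernels])

# With penalty b^T Q b replacing ||b||^2, the unconstrained maximizer of
# c^T b - b^T Q b is b = Q^{-1} c / 2; clip to keep the combined kernel PSD
beta = np.clip(np.linalg.solve(Q, c) / 2.0, 0, None)
K = sum(b * Ki for b, Ki in zip(beta, kernels))
```

With Q equal to the identity this reduces to a plain 2-norm penalty, which is the sense in which the quadratic form generalizes the usual MKL objectives.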
My research is in ''Computer Vision'', ''Medical Image Analysis'', and some aspects of ''Machine Learning''. \nIn particular, I am interested in problems motivated by image data with a distinct optimization and/or geometric flavor.\n\nPlease take a look at some [[selected recent papers]] to get a sense of what ''[[our group|#Group]]'' is working on. \n\nIf you are a CS student interested in graduate research in my lab, you should have demonstrable preparedness in one or more of the following areas:\n#Algorithms for Image Segmentation, Registration,...\n#Kernel methods \n#Statistical Image Analysis \n#Applied aspects of Convex optimization or Combinatorial methods\n#Neuroscience and/or Neuroimaging\n\n\n''Recruiting new students?'' If there is sufficient funding, I will take on one or two graduate students each year. But outside of exceptional cases, these slots are for individuals who have collaborated with me on a research problem for a semester (e.g., in an independent study).\n\n''Summer internship?'' I do ''NOT'' offer summer internships and cannot reply to template letters requesting such positions. If you are already at UW, see below.\n\n@@color(red):''Undergraduate Research (for UW students only):'' As part of a [[NSF REU|]] project, several positions are available on a rolling basis. Get in touch with me if you are interested.@@\n
Welcome to the homepage of Vikas Singh.\n\nI am an ==Assistant== Associate Professor in [[Biostatistics & Med. Informatics|]] and [[Computer Sciences|]] departments at [[University of Wisconsin-Madison|]]. \n\n
Hypothesis testing on signals defined on surfaces (such as the cortical surface) is a fundamental component of a variety of studies in Neuroscience. The goal here is to identify regions that exhibit changes as a function of the clinical condition under study. As the clinical questions of interest move towards identifying very early signs of diseases, the corresponding statistical differences at the group level invariably become weaker and increasingly hard to identify. Indeed, after a multiple comparisons correction is adopted (to account for correlated statistical tests over all surface points), very few regions may survive. In contrast to hypothesis tests on point-wise measurements, in this paper, we make the case for performing statistical analysis on multi-scale shape descriptors that characterize the local topological context of the signal around each surface vertex. Our descriptors are based on recent results from harmonic analysis that show how wavelet theory extends to non-Euclidean settings (i.e., irregular weighted graphs). We provide strong evidence that these descriptors successfully pick up group-wise differences, where traditional methods either fail or yield unsatisfactory results. Other than this primary application, we show how the framework allows performing cortical surface smoothing in the native space without mapping to a unit sphere.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Project Webpage|]]\n\n''[[Copyright]]''
We study the problem of interactive segmentation and contour completion for multiple objects. The constraints our model incorporates come from user scribbles (interior or exterior constraints) as well as from information regarding the topology of the 2-D space after partitioning (number of closed contours desired). We discuss how concepts from discrete calculus and a simple identity using the Euler characteristic of a planar graph can be utilized to derive a practical algorithm for this problem. We also present specialized branch and bound methods for the case of single contour completion under such constraints. On an extensive dataset of ~1000 images, our experiments suggest that a small amount of side knowledge can give strong improvements over fully unsupervised contour completion methods. We show that by interpreting user indications topologically, user effort is substantially reduced. \n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
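The "simple identity using the Euler characteristic" can be stated concretely: for a planar graph with V vertices, E edges, F faces, and C connected components, V − E + F = 1 + C (counting the unbounded face once), so the number of bounded faces, i.e. closed contours, is E − V + C. A tiny sketch of just this counting identity (the helper name is hypothetical, not from the paper's code):

```python
def closed_contours(num_vertices, edges, num_components=1):
    """Number of bounded faces (closed contours) of a planar graph.

    From Euler's formula V - E + F = 1 + C, the count of bounded
    faces is F - 1 = E - V + C.
    """
    return len(edges) - num_vertices + num_components
```

A constraint such as "the user wants exactly k closed contours" then becomes a simple linear condition on edge/vertex counts, which is what makes it usable inside an optimization model.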
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape’s local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive non-Euclidean Wavelet based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.\n\n[[PDF|]]\n\n[[bibtex|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
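The dual-domain descriptor alluded to above is, in spirit, a spectral graph wavelet coefficient (in the sense of Hammond et al.): expand the signal in the graph Laplacian eigenbasis and reweight by a band-pass kernel at several scales. A small numpy sketch under that assumption; the kernel g and the 5-cycle example are illustrative choices, not the paper's.

```python
import numpy as np

def graph_wavelet_descriptor(W, f, scales):
    """Multi-scale descriptor of a signal f on a graph with adjacency W,
    in the spirit of the spectral graph wavelet transform."""
    L = np.diag(W.sum(1)) - W                  # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)                 # graph spectrum (ascending)
    f_hat = U.T @ f                            # graph Fourier transform of f
    g = lambda x: x * np.exp(-x)               # a band-pass wavelet kernel (g(0)=0)
    # one coefficient per (scale, vertex): sum_l g(s*lam_l) f_hat_l u_l(v)
    return np.stack([U @ (g(s * lam) * f_hat) for s in scales])

# Toy mesh: a 5-cycle graph; signal is an impulse at vertex 0
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
desc = graph_wavelet_descriptor(W, np.eye(n)[0], scales=[0.5, 1.0, 2.0])
```

Because the coefficients are a matrix function of L, they inherit the graph's symmetries: on the cycle, vertices equidistant from the impulse get identical descriptors, which is the "local context" behavior the abstract describes.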
Matching one set of objects to another is a ubiquitous task in machine learning and computer vision that often reduces to some form of the quadratic assignment problem (QAP). The QAP is known to be notoriously hard, both in theory and in practice. Here, we investigate if this difficulty can be partly mitigated when some additional piece of information is available: (a) that all QAP instances of interest come from the same application, and (b) the correct solution for a set of such QAP instances is given. We propose a new approach to accelerate the solution of QAPs based on learning parameters for a modified objective function from prior QAP instances. A key feature of our approach is that it takes advantage of the algebraic structure of permutations, in conjunction with special methods for optimizing functions over the symmetric group S_n in Fourier space. Experiments show that in practical domains the new method can outperform existing approaches. \n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\n[[Poster|]]\n\n[[Slides|]]\n\n[[Code|]]\n\n[[Presentation|]]\n\n''[[Copyright]]''
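For reference, the QAP objective being reshaped here has the familiar trace form min over permutation matrices P of trace(A P B P^T). A brute-force solver is only feasible for tiny n, but it makes the problem concrete; this is a didactic sketch, not the paper's Fourier-space method over the symmetric group.

```python
import numpy as np
from itertools import permutations

def qap_brute(A, B):
    """Exact minimizer of trace(A P B P^T) over n x n permutation matrices.

    Enumerates all n! permutations, so only usable for very small n;
    illustrates the objective, not a practical algorithm.
    """
    n = len(A)
    best, best_p = float('inf'), None
    for p in permutations(range(n)):
        P = np.eye(n)[list(p)]                # permutation matrix for p
        val = np.trace(A @ P @ B @ P.T)
        if val < best:
            best, best_p = val, p
    return best, best_p
```

The point of the paper is precisely that this n! blow-up (and the weakness of generic relaxations) can be partly bought off by learning from previously solved instances of the same family.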
[[C++|]]\n[[STL|]]\n[[Linux Man pages|]]\n[[CGAL Manual|]]\n[[LEDA|]]\n[[CLP User Manual|]]\n[[Subversion notes|]]\n[[VTK|]]\n[[GLPK|]]\n[[GSL|]]\n[[CVX|]]\n[[Yalmip|]]\n[[CplexMEX|]]\n[[CplexInt|]]\n[[SDPT3|]]\n[[Sedumi|]]\n[[A list of Matlab interfaces to solvers|]]\n[[MTL|]]\n[[STL Examples|]]\n[[Wild Magic|]]\n[[MIPAV|]]\n[[C code for Numerical Computation|]] (old)\n[[Tcl/Tk|]]\n[[C++ User's journal|]]\n[[Advanced Bash|]]\n[[Bash Programming tutorial|]]\n[[My .emacs|]] (almost nothing is original...)
I grew up in [[Calcutta|]] (also known as Kolkata). I finished high school at [[Vikas Vidyalaya|]]. Some of my pictures can be found [[here|Pictures]].\n\n\n
''[[5795 Medical Sciences Center|Directions to MSC 5795]]''\n1300 University Avenue\nMadison, WI 53706.\nemail: userid: vsingh, domain: biostat wisc edu\n\nPhone: ''(608)262-8875''. @@color(red):(please do NOT leave a message here, I NEVER check it)@@\nFax: (608)265-7916\n\n \n\n\n\n
''Abstract''\n\nMultiple hypothesis testing is a significant problem in nearly all neuroimaging studies. In order to correct for this phenomenon, we require a reliable estimate of the Family-Wise Error Rate (FWER). The well known Bonferroni correction method, while simple to implement, is quite conservative, and can substantially under-power a study because it ignores dependencies between test statistics. Permutation testing, on the other hand, is an exact, non-parametric method of estimating the FWER for a given α-threshold, but for acceptably low thresholds the computational burden can be prohibitive. In this paper, we show that permutation testing in fact amounts to populating the columns of a very large matrix P. By analyzing the spectrum of this matrix, under certain conditions, we see that P has a low-rank plus a low-variance residual decomposition which makes it suitable for highly sub-sampled (on the order of 0.5%) matrix completion methods. Based on this observation, we propose a novel permutation testing methodology which offers a large speedup, without sacrificing the fidelity of the estimated FWER. Our evaluations on four different neuroimaging datasets show that a computational speedup factor of roughly 50x can be achieved while recovering the FWER distribution up to very high accuracy. Further, we show that the estimated α-threshold is also recovered faithfully, and is stable.\n\n[[PDF|]]\n\n[[Longer version|]]\n\n[[bibtex|]]\n\nSlides\n\n[[Code and project details|]]\n\n[[NITRC distribution|]]\n\n''[[Copyright]]''
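The baseline being accelerated is plain maxT permutation testing: every permutation contributes one column of test statistics (a column of the matrix P above), and the FWER-corrected threshold is a quantile of the per-permutation maximum. A naive pure-Python sketch of that baseline; the mean-difference statistic and all names are illustrative, and the paper's contribution is to avoid computing most of these columns.

```python
import random

def permutation_maxT(groupA, groupB, n_perm=2000, alpha=0.05, seed=0):
    """Naive maxT permutation test over d features.

    Each permutation's feature-wise statistics form one column of the
    matrix P; the (1 - alpha) quantile of the column maxima gives an
    FWER-corrected threshold.
    """
    rng = random.Random(seed)
    pooled = groupA + groupB
    nA, d = len(groupA), len(groupA[0])

    def max_stat(A, B):
        # max over features of the absolute mean difference
        return max(abs(sum(r[j] for r in A) / len(A) -
                       sum(r[j] for r in B) / len(B)) for j in range(d))

    null = []
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # random relabeling of subjects
        null.append(max_stat(pooled[:nA], pooled[nA:]))
    null.sort()
    return null[int((1 - alpha) * n_perm)]     # FWER-corrected threshold
```

Each iteration costs a full pass over all features, which is exactly why sub-sampling the columns of P and completing the rest is such a large win.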
''Abstract''\n\nWe study the problem of online subspace learning in the context of sequential observations involving structured perturbations. In online subspace learning, the observations are an unknown mixture of two components presented to the model sequentially -- the main effect which pertains to the subspace and a residual/error term. If no additional requirement is imposed on the residual, it often corresponds to noise terms in the signal which were unaccounted for by the main effect. To remedy this, one may impose "structural" contiguity, which has the intended effect of leveraging the secondary terms as a covariate that helps the estimation of the subspace itself, instead of merely serving as a noise residual. We show that the corresponding online estimation procedure can be written as an approximate optimization process on a Grassmannian. We propose an efficient numerical solution, GOSUS, Grassmannian Online Subspace Updates with Structured-sparsity, for this problem. GOSUS is expressive enough in modeling both homogeneous perturbations of the subspace and structural contiguities of outliers, and after certain manipulations, solvable via an alternating direction method of multipliers (ADMM). We evaluate the empirical performance of this algorithm on two problems of interest: online background subtraction and online multiple face tracking, and demonstrate that it achieves competitive performance with the state-of-the-art in near real time.\n\n[[PDF|]]\n\n[[Supplement|]]\n\n[[bibtex|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
''Abstract''\n\nThe problem of matching not just two, but m different sets of objects to each other arises in many contexts, including finding the correspondence between feature points across multiple images in computer vision. At present it is usually solved by matching the sets pairwise, in series. In contrast, we propose a new method, Permutation Synchronization, which finds all the matchings jointly, in one shot, via a relaxation to eigenvector decomposition. The resulting algorithm is both computationally efficient, and, as we demonstrate with theoretical arguments as well as experimental results, much more stable to noise than previous methods.\n\n[[PDF|]]\n\n[[Supplement|]]\n\n[[bibtex|]]\n\n[[Code and project details|]]\n\n''[[Copyright]]''
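The eigenvector relaxation can be sketched compactly: stack the pairwise matchings into an mn x mn block matrix, take its top n eigenvectors, and round each block back to a permutation. The greedy rounding below is a simplification (a Hungarian assignment would be the more careful choice); in the noiseless case it recovers globally consistent matchings exactly.

```python
import numpy as np

def round_to_permutation(M):
    """Greedy rounding of a near-permutation matrix (simplified scheme)."""
    n = M.shape[0]
    P = np.zeros((n, n))
    M = M.copy()
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(M), M.shape)
        P[i, j] = 1.0
        M[i, :] = -np.inf      # row and column are now used up
        M[:, j] = -np.inf
    return P

def synchronize(T_blocks):
    """Permutation synchronization sketch: T_blocks[i][j] is the (noisy)
    n x n matching between sets i and j; returns consistent matchings
    relative to set 0, from the top-n eigenvectors of the block matrix."""
    m, n = len(T_blocks), len(T_blocks[0][0])
    T = np.block(T_blocks)
    lam, V = np.linalg.eigh(T)
    U = V[:, -n:]                              # eigenvectors of the n largest eigenvalues
    blocks = [U[i * n:(i + 1) * n] for i in range(m)]
    # each block, expressed relative to the first and rounded, gives P_i P_0^T
    return [round_to_permutation(Ui @ blocks[0].T) for Ui in blocks]
```

The stability claim in the abstract comes from the spectral gap: noise perturbs the top eigenspace only mildly, so the rounded matchings stay correct well past the point where chaining pairwise matchings breaks down.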
Spring 2015: [[CS 766: Computer Vision|]]. \n\nIn the past...\nSpring 2014: [[CS 766: Computer Vision|]]. \nSpring 2013: [[BMI/CS 767: Methods in Medical Image Anal.|]]\nSpring 2011: [[CS 766: Computer Vision|]]. \nSpring 2011: [[CS 540: Intro to AI|]]. \nSpring 2010: [[CS 540: Intro to AI|]].\nSpring 2009: [[CS 638: Methods in Medical Image Anal|]].\nSpring 2008: [[CS 638: Methods in Medical Image Anal|]]. \n\n[[Other AI courses offered here at Wisconsin|]]\n[[AI Qualifier homepage|]]
The backdrop is the canal along Nyhavn in Copenhagen [credit: AM]\n\n[img[DSC01894.JPG]]\n\nThe backdrop is Kinkaku-ji temple in Kyoto [credit: BSM]\n\n[img[vikasKyoto.JPG]]
''Abstract''\n\nCanonical correlation analysis (CCA) is a widely used statistical technique to capture correlations between two sets of multi-variate random variables and has found a multitude of applications in computer vision, medical imaging and machine learning. The classical formulation assumes that the data lives in a pair of vector spaces which makes its use in certain important scientific domains problematic. For instance, the set of symmetric positive definite matrices (SPD), rotations and probability distributions, all belong to certain curved Riemannian manifolds where vector-space operations are in general not applicable. Analyzing the space of such data via the classical versions of inference models is rather sub-optimal. But perhaps more importantly, since the algorithms do not respect the underlying geometry of the data space, it is hard to provide statistical guarantees (if any) on the results. Using the space of SPD matrices as a concrete example, this paper gives a principled generalization of the well known CCA to the Riemannian setting. Our CCA algorithm operates on the product Riemannian manifold representing SPD matrix-valued fields to identify meaningful statistical relationships on the product Riemannian manifold. As a proof of principle, we present results on an Alzheimer’s disease (AD) study where the analysis task involves identifying correlations across diffusion tensor images (DTI) and Cauchy deformation tensor fields derived from T1-weighted magnetic resonance (MR) images.
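The Riemannian ingredients such an algorithm needs on the SPD manifold are standard: the affine-invariant log map (the tangent vector at X pointing toward Y) and the corresponding geodesic distance. A small numpy sketch of just those building blocks, not the CCA algorithm itself:

```python
import numpy as np

def spd_sqrt_inv_sqrt(X):
    """X^(1/2) and X^(-1/2) for a symmetric positive definite matrix X."""
    lam, U = np.linalg.eigh(X)
    return (U * np.sqrt(lam)) @ U.T, (U / np.sqrt(lam)) @ U.T

def spd_log(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    lam, U = np.linalg.eigh(X)
    return (U * np.log(lam)) @ U.T

def log_map(X, Y):
    """Log_X(Y) under the affine-invariant metric:
    X^(1/2) logm(X^(-1/2) Y X^(-1/2)) X^(1/2)."""
    Xs, Xsi = spd_sqrt_inv_sqrt(X)
    return Xs @ spd_log(Xsi @ Y @ Xsi) @ Xs

def geodesic_dist(X, Y):
    """Affine-invariant geodesic distance between SPD matrices X and Y."""
    _, Xsi = spd_sqrt_inv_sqrt(X)
    lam = np.linalg.eigvalsh(Xsi @ Y @ Xsi)
    return np.sqrt((np.log(lam) ** 2).sum())
```

Mapping SPD-valued voxels into tangent spaces with `log_map` is what lets vector-space machinery (means, covariances, correlations) be applied while still respecting the curved geometry the abstract emphasizes.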
''Abstract''\n\nStatistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer's disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer's Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer’s Disease Research Center (WADRC), focusing on individuals labeled as having Alzheimer’s Disease (AD), Mild Cognitive Impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation.
''Abstract''\n\nPrecise detection and quantification of white matter hyperintensities (WMH) observed in T2-weighted Fluid Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Images (MRI) is of substantial interest in aging, and age-related neurological disorders such as Alzheimer's disease (AD). This is mainly because WMH may reflect co-morbid neural injury or cerebral vascular disease burden. WMH in the older population may be small, diffuse, and irregular in shape, and highly heterogeneous within and across subjects. Here, we pose hyperintensity detection as a supervised inference problem and adapt two learning models, specifically, Support Vector Machines and Random Forests, for this task. Using texture features engineered by texton filter banks, we provide a suite of effective segmentation methods for this problem. Through extensive evaluations on healthy middle-aged and older adults who vary in AD risk, we show that our methods are reliable and robust in segmenting hyperintense regions. A measure of hyperintensity accumulation, referred to as normalized effective WMH volume, is shown to be associated with dementia in older adults and parental family history in cognitively normal subjects. We provide an open source library for hyperintensity detection and accumulation (interfaced with existing neuroimaging tools), that can be adapted for segmentation problems in other neuroimaging studies.
''Current group members''\n>[[Maxwell Collins|]] (CS Doctoral student)\n>[[Vamsi Ithapu|]] (CS Doctoral student)\n>[[Hyunwoo Kim|]] (CS Doctoral student)\n>[[WonHwa Kim|]] (CS Doctoral student)\n>Chris Lindner (CS Undergraduate student, NSF REU supported)\n>[[Deepti Pachauri|]] (CS Doctoral student)\n>Greg Plumb (CS Undergraduate student, NSF REU supported)\n> Sathya Ravi (Industrial Eng Graduate student)\n>[[Vikas Singh|]] (Principal Investigator)\n>[[Jia Xu|]] (CS Doctoral student)\n\n''Other student collaborators:'' \n> Jongho Lee (CS, jointly with Chuck Dyer and Beth Burnside)\n\n\n\n''Doctoral Alumni''\n[[Chris Hinrichs|]] (Ph.D. Computer Science, 2012). First position: CIBM Post-doctoral Fellow with [[Rob Nowak|]] and [[Tim Rogers|]]. \n\n''Masters Alumni''\n[[Kamiya Motwani|]] (MS Computer Science, 2011). First position: Software developer, Oracle. \n[[Qinyuan (Ken) Sun|]] (MS ECE, 2015, jointly with Ozioma C. Okonkwo)\n\n''Past Undergraduate Collaborators''\nJamie Warner (CS Undergraduate student, NSF REU supported)\nZeyuan Hu (BS CS/Math, 2014)\nYihong Dai (BS CS/Math, 2014)\nDavid Weber (BS Physics/Math, 2013)\n[[Patrick Blesi|]] (BS Computer Science, 2011)\nDylan Hower (undergrad/grad, Summer - Fall 2008)\n\n''Short term (non UW) visitors''\nSylvia Charchut (undergrad, Southeastern Louisiana University, Summer 2012, [[Institute in Biostatistics|]])\nJaime Torres (undergrad, University of Puerto Rico, Summer 2010, [[Institute in Biostatistics|]])\n\nAt UW, we have strong ongoing collaborations with\n[[Andy Alexander|]]\n[[Chuck Dyer|]]\n[[Moo K. Chung|]]\n[[Sterling Johnson|]].\n\n\n\n\n
''As PI'':\n\n* "Image analysis for Neuroimaging" project as part of [[Center for Predictive Computational Phenotyping (NIH BD2K center)|]] (2014, PI: Craven), with Core coPIs Barb Bendlin, Sterling Johnson and Jerry Zhu \n* ICTR Novel Methods award (2014), with Nagesh Adluru\n* NSF CCF SMALL #1320755 (2013), with Risi Kondor\n* NSF RI CAREER #1252725 (2013)\n* University of Wisconsin Graduate School/WARF Fall Competition grant (2013)\n* NIH R01 #AG/040396 (2012)\n* NSF RI REU Supplement to #1116584 (2012)\n* Wisconsin Partnership Proposal (2012), with Sterling Johnson\n* NSF RI SMALL #1116584 (2011)\n* University of Wisconsin Graduate School/WARF Fall Competition grant (2010)\n* NIH R21 #AG/034315 (2009)\n* University of Wisconsin Graduate School/WARF Fall Competition grant (2009)\n* SIIM Research grant (2009)\n* Wisconsin Comprehensive Memory Program grant (2009)\n\n----\n''As collaborator, co-investigator, or consultant'':\n\n* [[Wisconsin ADRC|]] (2014)\n* [[University of Wisconsin ICTR (CTSA award)|]] (2007)\n and various others\n----\n\nWe are very grateful to support from \n<html>\n<img src="warf-logo.JPG" width="350" /> <img src="SIIMLogo.JPG" width="150" /> <img src="wpp.jpg" width="180" />\n\n<img src="uw-adrc.png" width="250" /> <img src="nih_logo.JPG" width="150" /> <img src="logo_nsf_world.JPG" width="150"/>\n</html>\n
''[[Complete List|]]''\n\n> Deepti Pachauri, Risi Kondor, Gautam Sargur, Vikas Singh, [[Permutation Diffusion Maps (PDM) with application to the image association problem in Computer Vision]], Advances in Neural Information Processing Systems (NIPS), December 2014. [acceptance 24.7%]. \n\n> Vamsi K. Ithapu, Vikas Singh, Ozioma C. Okonkwo, Sterling C. Johnson, [[Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials]], Proceedings of Medical Image Computing And Computer Assisted Intervention (MICCAI), September 2014. [acceptance 31%]. \n\n> Hyunwoo Kim, Nagesh Adluru, Barbara B. Bendlin, Sterling C. Johnson, Baba C. Vemuri, Vikas Singh, [[Canonical Correlation Analysis on Riemannian Manifolds and its Applications]], Proceedings of European Conference on Computer Vision (ECCV), September 2014. [acceptance 26.7%].\n\n> Maxwell D. Collins, Ji Liu, Jia Xu, Lopamudra Mukherjee, Vikas Singh, [[Spectral Clustering with a Convex Regularizer on Millions of Images]], Proceedings of European Conference on Computer Vision (ECCV), September 2014. [acceptance 26.7%].\n\n> Hyunwoo Kim, Nagesh Adluru, Maxwell Collins, Moo Chung, Barbara Bendlin, Sterling C. Johnson, Richard J. Davidson, Vikas Singh, [[Multivariate General Linear Models (MGLM) on Riemannian Manifolds with Applications to Statistical Analysis of Diffusion Weighted Images]], Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), June 2014. [also selected for ''Oral'' presentation, acceptance 5.7%]\n\n> Won Hwa Kim, Vikas Singh, Moo K. Chung, Chris Hinrichs, Deepti Pachauri, Ozioma C Okonkwo, Sterling C Johnson, [[Multi-resolutional Shape Features via Non-Euclidean Wavelets: Applications to Statistical Analysis of Cortical Thickness]], Neuroimage, Volume 93(1), 2014. [impact factor: 6.9]\n\n> Vamsi K. Ithapu, Vikas Singh, Chris Lindner, Ben Austin, Chris Hinrichs, Cynthia Carlsson, Barbara Bendlin, Sterling C. 
Johnson, [[Extracting and summarizing white matter hyperintensities using supervised segmentation methods in Alzheimer's disease risk and aging studies]], Human Brain Mapping, Volume 35(8), 2014. [impact factor: 6.25]\n \n> Chris Hinrichs, Vamsi Ithapu, Qinyuan Sun, Sterling C. Johnson, Vikas Singh, [[Speeding up Permutation Testing in Neuroimaging]], Advances in Neural Information Processing Systems (NIPS) 27, December 2013. [also selected for ''Oral spotlight'', acceptance 3.6%; Hinrichs/Ithapu are joint first authors]\n\n> Deepti Pachauri, Risi Kondor, Vikas Singh, [[Solving the Multi-way Matching problem by Permutation Synchronization]], Advances in Neural Information Processing Systems (NIPS) 27, December 2013. [acceptance 20.2%]\n\n> Jia Xu, Vamsi Ithapu, Lopamudra Mukherjee, James M. Rehg, Vikas Singh, [[GOSUS: Grassmannian Online Subspace Updates with Structured-sparsity]], Proceedings of International Conference on Computer Vision (ICCV), December 2013. [acceptance 27.8%]\n\n> Won Hwa Kim, Moo K. Chung, Vikas Singh, "[[Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems]]", Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), June 2013. [acceptance 26%]\n\n> Jia Xu, Maxwell D. Collins, Vikas Singh, "[[Incorporating User Interaction and Topological Constraints within Contour Completion via Discrete Calculus]]", Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), June 2013. [acceptance 26%]\n\n> Chris Hinrichs, Vikas Singh, Jiming Peng, Sterling C. Johnson, "[[Q-MKL: Matrix-induced Regularization in Multi-Kernel Learning with Applications to Neuroimaging]]", Advances in Neural Information Processing Systems (NIPS) 26, December 2012 [acceptance 25.2%]\n\n> Won Hwa Kim, Deepti Pachauri, Charles Hatt, Moo K. Chung, Sterling C. 
Johnson, Vikas Singh, "[[Wavelet based multi-scale shape features on arbitrary surfaces for cortical thickness discrimination]]", Advances in Neural Information Processing Systems (NIPS) 26, December 2012 [acceptance 25.2%]\n \n> Lopamudra Mukherjee, Vikas Singh, Jia Xu, Maxwell D. Collins, "[[Analyzing the Subspace Structure of Related Images: Concurrent Segmentation of Image Sets]]", Proceedings of European Conference on Computer Vision (ECCV), October 2012. [acceptance ~25%]\n\n> Deepti Pachauri, Maxwell D. Collins, Risi Kondor, Vikas Singh, "[[Incorporating Domain Knowledge in Matching Problems via Harmonic Analysis]]", Proceedings of International Conference on Machine Learning (ICML), June 2012. [also selected for ''Oral'' presentation, acceptance 27%, Pachauri received ICML 2012 travel award]\n\n> Maxwell D. Collins, Jia Xu, Leo J. Grady, Vikas Singh, "[[Random Walks for Multi Image Segmentation: Quasiconvexity Results and GPU-based Solutions]]", Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), June 2012. [acceptance 24%]\n\n> Deepti Pachauri, Chris Hinrichs, Moo K. Chung, Sterling C. Johnson, Vikas Singh, "[[Topology based Kernels with Application to inference problems in Alzheimer's disease]]", IEEE Transactions on Medical Imaging (TMI), Volume 30(10), 2011.\n\n> Lopamudra Mukherjee, Vikas Singh, Jiming Peng, "[[Scale Invariant cosegmentation for image groups]]", Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), June 2011. [also selected for ''Oral'' presentation, acceptance 3.5%]\n\n> Jiming Peng, Lopamudra Mukherjee, Vikas Singh, Dale Schuurmans, Linli Xu, "[[An efficient algorithm for maximal margin clustering]]", Journal of Global Optimization, Volume 52(1), 2012. \n\n> Chris Hinrichs, Vikas Singh, Guofan Xu, Sterling C. Johnson, "[[Predictive Markers for AD in a Multi-Modality Framework: An Analysis of MCI Progression in the ADNI Population]]", Neuroimage, Volume 55(2), March 2011. 
[impact factor: 5.74]\n\n> Kamiya Motwani, Nagesh Adluru, Chris Hinrichs, Andrew L. Alexander, Vikas Singh, "[[Epitome driven 3-D Diffusion Tensor image segmentation: on extracting specific structures]]", Advances in Neural Information Processing Systems (NIPS) 24, December 2010 [also selected for ''Oral spotlight'' 5.9%, Motwani received NIPS 2010 travel award, and additional funds from [[Women in Machine Learning (WIML)|]]]\n\n> Maxwell D. Collins, Vikas Singh, Andrew L. Alexander, "[[Network Connectivity inference over curvature regularizing line graphs]]", Proceedings of Asian Conference on Computer Vision (ACCV), November 2010. [also selected for ''Oral'' presentation, acceptance 4.7%, ''Best Application Paper'' award]\n\n> Lopamudra Mukherjee, Vikas Singh, Jiming Peng, Chris Hinrichs, "[[Learning Kernels for variants of Normalized Cuts: Convex Relaxations and Applications]]", Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), June 2010. [acceptance 27%]\n\n> Vikas Singh, Lopamudra Mukherjee, Jiming Peng, Jinhui Xu, "[[Ensemble Clustering using Semidefinite Programming with applications]]", Machine Learning, Volume 79(1-2), May 2010. [impact factor: 2.326, longer version of NIPS 2007 paper]\n\n> Dorit S. Hochbaum, Vikas Singh, "[[An efficient algorithm for co-segmentation]]", Proceedings of International Conference on Computer Vision (ICCV), October 2009. [also selected for ''Oral'' presentation, acceptance 3.6%]\n\n> Dylan Hower, Vikas Singh, Sterling C. Johnson, "[[Label Set Perturbation for MRF based Neuroimaging Segmentation]]", Proceedings of International Conference on Computer Vision (ICCV), October 2009. [acceptance 19.6%]\n\n> Chris Hinrichs, Vikas Singh, Guofan Xu, Sterling C. Johnson, "[[MKL for robust multi-modal AD classification]]", Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI), September 2009. [acceptance 32%]\n\n> Moo K. Chung, Vikas Singh, Peter T. Kim, Kim M. Dalton, Richard J. 
Davidson, "[[Topological characterization of signal in brain images using the min-max diagram]]", Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI), September 2009. [acceptance 32%]\n\n> Chris Hinrichs, Vikas Singh, Lopamudra Mukherjee, Guofan Xu, Moo K. Chung, Sterling C. Johnson, "[[Spatially augmented LP Boosting for AD classification with evaluations on the ADNI dataset]]", Neuroimage, Volume 48(1), October 2009. [impact factor: 5.74]\n\n> Lopamudra Mukherjee, Vikas Singh, Charles R. Dyer, "[[Half-integrality based algorithms for Cosegmentation of Images]]", Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), June 2009. [also selected for ''Oral'' presentation, acceptance 4.1%]\n\n> Lopamudra Mukherjee, Vikas Singh, Jiming Peng, Jinhui Xu, Michael J. Zeitz, Ronald Berezney, "[[Generalized Median Graphs and Applications]]", Journal of Combinatorial Optimization (JCO), Volume 17(1), January 2009. [Longer version of ICCV 2007 paper]\n\n> Vikas Singh, Lopamudra Mukherjee, Petru M. Dinu, Jinhui Xu, Kenneth R. Hoffmann, "[[Limited view CT reconstruction and segmentation via constrained metric labeling]]", Computer Vision and Image Understanding (CVIU), //Special Issue on Discrete Optimization in Computer Vision//, Volume 112(1), October 2008. [Longer version of MMBIA 2007 paper ''invited'' to International Journal of Computer Vision]\n\n> Vikas Singh, Lopamudra Mukherjee, Moo K. Chung, "[[Cortical surface thickness as a classifier: Boosting for autism classification]]", Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI), September 2008. \n\n> Vikas Singh, Lopamudra Mukherjee, Jiming Peng, Jinhui Xu, "[[Ensemble Clustering using Semidefinite Programming]]", Advances in Neural Information Processing Systems (NIPS) 20, December 2007 [also selected for ''Oral spotlight'' 10.3%]\n\n> Lopamudra Mukherjee, Vikas Singh, Jiming Peng, Jinhui Xu, Michael J. 
Zeitz, Ronald Berezney, "[[Generalized Median Graphs: Theory and Applications]]", Proceedings of IEEE International Conference on Computer Vision (ICCV), October 2007. [acceptance: 23.6%]\n\n> Vikas Singh, Lopamudra Mukherjee, Jinhui Xu, Kenneth R. Hoffmann, Petru M. Dinu, Matthew Podgorsak, "[[Brachytherapy Seed Localization using Geometric and Linear Programming Techniques]]", IEEE Transactions on Medical Imaging (TMI), Special Issue on Mathematical Methods in Biomedical Image Analysis, Volume 26(9), September 2007.\n\n> Vikas Singh, Jinhui Xu, Kenneth R. Hoffmann, Guang Xu, Zhenming Chen, Anant Gopal, "[[Towards a theory of a Solution Space for the Biplane Imaging Geometry problem]]", Medical Physics, Volume 33(10), October 2006.\n\n> Lopamudra Mukherjee, Vikas Singh, Jinhui Xu, Kishore Malyavantham, Ronald Berezney, "[[On Mobility Analysis of Functional Sites from Time Lapse microscopic image sequences of living Cell Nucleus]]", Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI), October 2006.\n\n> Jinhui Xu, Guang Xu, Zhenming Chen, Vikas Singh, Kenneth R. Hoffmann, "[[Efficient Algorithms for Determining 3-D Bi-Plane Imaging Geometry]]", Journal of Combinatorial Optimization (JCO), Volume 10(2), September 2005.\n\n[[A partial listing of papers on Google Scholar with citation info|]]\n\n