by Iacono, John; Jacob, Riko; Tsakalidis, Konstantinos
Reference: arXiv.org
Publication: Published, 2019-06-01
Non-peer-reviewed article
Abstract: We present priority queues in the cache-oblivious external memory model with block size $B$ and main memory size $M$ that support, on $N$ elements, the operation \textsc{Update} (a combination of \textsc{Insert} and \textsc{DecreaseKey}) in $O(\frac{1}{B}\log_{\lambda/B}\frac{N}{B})$ amortized I/Os and the operations \textsc{ExtractMin} and \textsc{Delete} in $O(\lceil\frac{\lambda^{\varepsilon}}{B}\log_{\lambda/B}\frac{N}{B}\rceil\log_{\lambda/B}\frac{N}{B})$ amortized I/Os, using $O(\frac{N}{B}\log_{\lambda/B}\frac{N}{B})$ blocks, for a user-defined parameter $\lambda\in[2,N]$ and any real $\varepsilon\in(0,1)$. Our result improves upon previous I/O-efficient cache-oblivious and cache-aware priority queues [Chowdhury and Ramachandran, TALG 2018], [Brodal et al., SWAT 2004], [Kumar and Schwabe, SPDP 1996], [Arge et al., SICOMP 2007], [Fadel et al., TCS 1999].

We also present buffered repository trees that support, on a multi-set of $N$ elements, the operation \textsc{Insert} in $O(\frac{1}{B}\log_{\lambda/B}\frac{N}{B})$ I/Os and the operation \textsc{Extract} on $K$ extracted elements in $O(\frac{\lambda^{\varepsilon}}{B}\log_{\lambda/B}\frac{N}{B}+\frac{K}{B})$ amortized I/Os, using $O(\frac{N}{B})$ blocks, improving upon previous cache-aware and cache-oblivious results [Arge et al., SICOMP 2007], [Buchsbaum et al., SODA 2000].

In the cache-oblivious model, for $\lambda=O(E/V)$, we achieve $O(\frac{E}{B}\log_{E/(VB)}\frac{E}{B})$ I/Os for single-source shortest paths, depth-first search and breadth-first search algorithms on massive directed dense graphs $(V,E)$. Our algorithms are I/O-optimal for $E/V=\Omega(M)$ (and in the cache-aware setting for $\lambda=O(M)$).
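To make the interface above concrete, the following is a minimal in-memory Python sketch of a priority queue exposing Update (insert-or-decrease-key), ExtractMin and Delete, together with a Dijkstra-style single-source shortest-paths loop that drives it, as in the graph applications mentioned in the abstract. The class name DecreaseKeyPQ, the lazy-deletion scheme and the dijkstra helper are illustrative assumptions; this toy does not model the paper's cache-oblivious structure or its I/O bounds.

import heapq

class DecreaseKeyPQ:
    """Toy priority queue with the interface from the abstract:
    update (insert or decrease-key), extract_min, delete.
    Illustrates the operations' semantics only, not the paper's structure."""

    def __init__(self):
        self._heap = []        # lazy heap of (key, element) pairs
        self._key = {}         # current best key per live element
        self._deleted = set()  # elements removed via delete

    def update(self, element, key):
        # Insert the element, or lower its key if the new key is smaller.
        self._deleted.discard(element)
        if element not in self._key or key < self._key[element]:
            self._key[element] = key
            heapq.heappush(self._heap, (key, element))

    def delete(self, element):
        # Lazily mark the element as deleted.
        self._deleted.add(element)
        self._key.pop(element, None)

    def extract_min(self):
        # Pop until a pair that is still current (neither stale nor deleted).
        while self._heap:
            key, element = heapq.heappop(self._heap)
            if element in self._deleted or self._key.get(element) != key:
                continue
            del self._key[element]
            return element, key
        raise KeyError("extract_min from empty priority queue")


def dijkstra(adj, source):
    """Single-source shortest paths driven by update/extract_min.
    `adj` maps a vertex to a list of (neighbour, nonnegative weight) pairs."""
    pq = DecreaseKeyPQ()
    pq.update(source, 0)
    dist = {}
    while True:
        try:
            u, d = pq.extract_min()
        except KeyError:
            break
        dist[u] = d
        for v, w in adj.get(u, []):
            if v not in dist:
                pq.update(v, d + w)  # insert or decrease-key in one call
    return dist

The lazy-deletion approach (pushing a new pair on every key decrease and skipping stale pairs in extract_min) is the standard trick for adding decrease-key semantics to a binary heap; the paper's data structure instead batches these operations to obtain the stated amortized I/O bounds.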